Can I modify the ownership of a shared folder in Vagrant? - acl

I use Vagrant and Chef to develop my own blog in a virtual machine. To have easy access to the WordPress folder I created a shared folder.
Basically the WordPress folder lives on my host and gets mounted as a shared folder at /var/www/wordpress in the VM. The configuration is similar to:
config.vm.share_folder "foo", "/guest/path", "/host/path"
My problem is that the ownership in my VM is always vagrant:vagrant even if I change it on my host. Ownership changes in the VM get ignored.
I cannot use chown to set the ownership of the upload directory to www-data:www-data.
It is possible to use chmod and change the access restrictions to 777, but this is a really ugly hack.
Here is what I actually want; is this possible?
Development: Access to the shared folder from my host.
Access Restriction: On the VM all files and folders should have proper and secure ownership and access restrictions.

As @StephenKing suggests, you can change the options of the whole directory.
The relevant function is not documented, but the source tells us:
# File 'lib/vagrant/config/vm.rb', line 53
def share_folder(name, guestpath, hostpath, opts=nil)
  @shared_folders[name] = {
    :guestpath => guestpath.to_s,
    :hostpath  => hostpath.to_s,
    :create    => false,
    :owner     => nil,
    :group     => nil,
    :nfs       => false,
    :transient => false,
    :extra     => nil
  }.merge(opts || {})
end
Basically you can set the group, owner and ACL for the whole folder, which is far better than making everything world-writable on the host. I have not found any method to change the ownership of a nested directory.
Here is a quick fix:
config.vm.share_folder "v-wordpress", "/var/www/wordpress", "/host/path", :owner => "www-data", :group => "www-data"

@john-syrinek
In 1.2+:
config.vm.synced_folder "src/", "/srv/website",
owner: "root", group: "root"
http://docs.vagrantup.com/v2/synced-folders/basic_usage.html

You can allow changing the ownership inside the guest:
config.vm.share_folder "foo", "/guest/path", "/host/path", {:extra => 'dmode=777,fmode=777'}

As the other answers have pointed out, you should probably set the correct owner and group using the owner and group configuration options.
However, sometimes that won't work (for example, when the target user is only created later on during provisioning). In those cases, you can remount the share:
sudo mount -t vboxsf -o uid=`id -u www-data`,gid=`id -g www-data` /path/to/share /path/to/share
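To confirm the remount took effect, a quick check (using the same hypothetical /path/to/share and www-data user as above) might be:
# show the mount options and the numeric uid/gid now owning the share
mount | grep /path/to/share
ls -ldn /path/to/share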

Following up on @StephenKing's and @aycokoster's awesome tips, I had a use-case for mounting another directory read-only.
I added
config.vm.share_folder "foo", "/guest/path", "/host/path", :extra => 'ro'
and
# discard exit status because chown `id -u vagrant`:`id -g vagrant` /host/path is okay
vagrant up || true

Related

Using logstash Elasticsearch output plugin error: NameError: SSLConnectionSocketFactory not found

I am trying to import data from MySQL to Elasticsearch using Logstash.
Versions of software used:
Java/JRE 1.8
Elasticsearch 6.1.0
Logstash 6.1.0
My conf contents are as follows:
file: simple-out.conf
input {
  jdbc {
    # MySQL JDBC connection string to our database
    jdbc_connection_string => "jdbc:mysql://valid/validDBNAME?useSSL=false"
    # The user we wish to execute our statement as
    jdbc_user => "MY USER"
    jdbc_password => "MY PWD"
    # The path to our downloaded JDBC driver
    jdbc_driver_library => "C:\JavaDevelopment\TomcatServer\apache-tomcat-8.5.20\lib\mysql-connector-java-5.1.45-bin.jar"
    # The name of the driver class for MySQL
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    # our query
    statement => "SELECT * from testtable"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "50000"
  }
}
output {
  stdout { codec => json_lines }
  elasticsearch {
    hosts => "http://localhost:9200"
    index => "test-migrate"
    document_type => "data"
  }
}
When I run Logstash I get the following error:
[2017-12-19T16:50:08,055][ERROR][logstash.pipeline ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<NameError: SSLConnectionSocketFactory not found ERROR
Please suggest how to get past this.
Thanks
Try adding the httpclient-VERSION.jar and httpcore-VERSION.jar files to the LOGSTASH_HOME/vendor/jruby/lib/ folder.
I had the same problem; the solution is:
Add a LOGSTASH_HOME variable in the environment variables (My Computer > Properties) pointing to C:\some_path\logstash-6.3.0.
Then add these files to C:\some_path\logstash-6.3.0\logstash-core\lib\jars:
files:
httpclient-4.5.2.jar
log4j-1.2.17.jar
httpcore-VERSION.jar
This is because Logstash doesn't come with the SSL jar files; you need to add them to the jars folder and set the environment variable so that Logstash can read them.
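For example, from a Git Bash prompt the copy might look roughly like this (the download location is hypothetical, and the httpcore version placeholder is kept from the file list above; adjust paths and versions to your setup):
# copy the HTTP client jars into Logstash's jar directory
cp /c/Downloads/httpclient-4.5.2.jar /c/some_path/logstash-6.3.0/logstash-core/lib/jars/
cp /c/Downloads/httpcore-VERSION.jar /c/some_path/logstash-6.3.0/logstash-core/lib/jars/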
Also, try using Git Bash rather than cmd or PowerShell, and close and restart both Logstash and Elasticsearch.
Follow this link to see which indices are coming into Elasticsearch:
http://localhost:9200/_cat/indices?v
If you are running on a different port, change the port number accordingly.
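If curl is available (for example in Git Bash), a quick check against the default localhost:9200 is:
# list all indices in the local Elasticsearch instance
curl "http://localhost:9200/_cat/indices?v"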
Additional tips:
It didn't work for me on JDK 10.x, so use JDK 8.x.
The installer may not work either; just unzip the archives and run elasticsearch.bat and logstash -f config_filepath\configfile.config from the respective bin folders.
On Windows you can type cmd in the Explorer address bar to open a prompt at that path.
Please make sure the MySQL server is actually running (not merely installed) on the server.
Learn more at https://www.elastic.co/blog/logstash-jdbc-input-plugin
To resolve this yourself in the future:
Open the logstash/bin folder, right-click, and open Git Bash (you need Git installed).
Type "vi log" and press Tab a few times until all the logstash files are listed.
Then type "vi logstash" and press Enter to open the launcher script.
Reading the script shows the flow: LOGSTASH_HOME and JAVACMD (near the bottom) are what launch the jars. JAVACMD should point at your JDK 8.x via the JAVA_HOME environment variable, and LOGSTASH_HOME determines the jar directory; either change it or put your jars in that folder.
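A minimal sketch of those steps from Git Bash, assuming the same hypothetical install path as above:
# open the Logstash launcher script and look for LOGSTASH_HOME and JAVACMD near the bottom
cd /c/some_path/logstash-6.3.0/bin
vi logstash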
Hope this helps!

Error starting Apache Drill in Embedded Mode on Windows 10

I am trying to start Apache Drill 1.10 in Embedded Mode on Windows 10 x64 (with Oracle JVM 1.8.0_131). When launching the command
sqlline.bat -u "jdbc:drill:zk=local"
I get the following:
Error during udf area creation [/C:/Users/<user>/drill/udf/registry] on file system [file:///] (state=,code=0)
So, after some googling, I have changed the drill-override.conf file this way:
drill.exec: {
  cluster-id: "drillbits1",
  zk.connect: "localhost:2181",
  udf: {
    # number of retry attempts to update remote function registry
    # if registry version was changed during update
    retry-attempts: 10,
    directory: {
      # Override this property if custom file system should be used to create remote directories
      # instead of default taken from Hadoop configuration
      fs: "file:///",
      # Set this property if custom absolute root should be used for remote directories
      root: "/c:/work"
    }
  }
}
Then I checked the following:
proper permissions set on the folder
console started as an Administrator
But I still get the same error:
Error during udf area creation [/c:/work/drill/udf/registry] on file system [file:///] (state=,code=0)
I can't disable UDF since I don't have an active connection.
Any suggestions?
This seems to be related to the ownership of the folders, as per this link.
Details of the solution from the link are quoted as follows:
Run these commands before the first time you are running sqlline.bat.
mkdir %userprofile%\drill
mkdir %userprofile%\drill\udf
mkdir %userprofile%\drill\udf\registry
mkdir %userprofile%\drill\udf\tmp
mkdir %userprofile%\drill\udf\staging
takeown /R /F %userprofile%\drill

Access denied (remote-user and database user are different) on drush sql-sync

With Drush v8.0.3 on both sides (local is a Mac and desarrollo is a Linux server with cPanel), I'm executing:
sql-sync @project.local @project.desarrollo
and getting this error at the end of the process (everything else works):
ERROR 1045 (28000): Access denied for user 'myremotehostuser'@'localhost' (using password: YES)
My alias file includes this:
$aliases['desarrollo'] = array (
  'root' => '/home/myremotehostuser/subdomains/project/',
  'uri' => 'http://project.myremotehost.com',
  'remote-user' => 'myremotehostuser',
  'remote-host' => 'myremotehost.com',
  'path-aliases' => array(
    '%dump-dir' => '/tmp',
  ),
  'source-command-specific' => array (
    'sql-sync' => array (
      'no-cache' => TRUE,
      'structure-tables-key' => 'common',
    ),
  ),
  'command-specific' => array (
    'sql-sync' => array (
      'sanitize' => TRUE,
      'no-ordered-dump' => TRUE,
      'structure-tables' => array(
        'common' => array('cache', 'cache_filter', 'cache_menu', 'cache_page', 'history', 'sessions', 'watchdog'),
      ),
    ),
  ),
);
And drush status on "desarrollo" returns:
Drupal version : 7.41
Site URI : http://project.myremotehost.com
Database driver : mysql
Database hostname : localhost
Database port :
Database username : databaseuser
Database name : databasename
PHP configuration : /usr/local/lib/php.ini
PHP OS : Linux
Drush script : /usr/local/bin/drush
Drush version : 8.0.3
Drush temp directory : /tmp
Drush configuration :
Drush alias files :
Drupal root : /home/myremotehostuser/subdomains/project/
Drupal Settings File : sites/default/settings.php
Site path : sites/default
As you can see, the database username (a MySQL user) is not the same as the "remote-user" (a cPanel user).
Drush is supposed to scan the settings files on both sides to figure out the right config; if I'm not wrong, since v7 you don't even have to add a $databases array or a db-url string. Then why is it trying to access the db as 'myremotehostuser'@'localhost' instead of 'databaseuser'@'localhost'?
In this case there is no database user called 'myremotehostuser', and I guess I could solve the problem by creating it and granting permissions on 'databasename', but I'm almost sure I'm missing something really dumb here and there must be a simple solution.
Edit:
Trying the same alias file on the same server, with the host user name and database user name still being different but this time in a freshly created account, seems to work perfectly. So I guess the problem is being caused by some cPanel/WHM configuration issue rather than anything Drush-related. I will keep trying this a couple of times before closing the question.
I encountered this issue too. It turns out Drush first looks for the file ~/.my.cnf and, if it's found, uses it to connect. The file contains the username and password:
[client]
user=myremotehostuser
Once I temporarily removed this file, Drush could dump the SQL successfully using the site's settings.php.
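If you want to test whether ~/.my.cnf is the culprit, a safe sketch (back the file up rather than deleting it; the alias names are the ones from the question) is:
# on the remote host (desarrollo): move the credentials file aside
mv ~/.my.cnf ~/.my.cnf.bak
# from the local machine: retry the sync
drush sql-sync @project.local @project.desarrollo
# on the remote host: restore the file afterwards
mv ~/.my.cnf.bak ~/.my.cnf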
Repeating the rsync and sql-sync processes in new accounts works without problems. The issue only happens in this already created account, with multiple databases, users, and prefixes. I don't have full access to this account, so I can't test it properly.
This issue was too specific and probably didn't deserve a question by itself, since it's almost impossible to replicate. I'll consider it "solved" and will update it in case I find out why it wasn't working.

How to change database of yii2 advanced template

How can I change the database information of my Yii2 advanced template?
I can't find the database settings.
http://www.yiiframework.com/doc-2.0/guide-index.html
In /common/config/main-local.php you set your database settings:
'components' => [
    'db' => [
        'class' => 'yii\db\Connection',
        'dsn' => 'mysql:host=localhost;dbname=DATABASE_NAME',
        'username' => 'DATABASE_USER',
        'password' => 'DATABASE_PASSWORD',
        'charset' => 'utf8',
    ],
],
The installation guide for advanced template is here: https://github.com/yiisoft/yii2-app-advanced/blob/master/docs/guide/start-installation.md
The advanced template has environments that each define the target-specific configuration. Basically, after cloning the template you need to make sure you set up the files under the environments folder correctly (it comes with dev and prod predefined configurations, for development and production environments).
In the config subfolders you'll find the *-local.php files that hold the configuration specific to that environment.
For the database you have to look in common/config/main-local.php.
After you're done with that, just navigate back to the template's root folder and run ./init. It will ask you which environment you want and put the files in place. Switching to another environment is just an ./init call away.
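For reference, initializing or switching environments looks roughly like this (run from the project root; the prompt comes from the stock init script):
cd /path/to/yii2-app-advanced
./init    # or: php init; choose 0 for Development or 1 for Production when prompted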
Obviously you're not obligated to keep using the environments if you have no use for them; you might as well modify the /common/config/main.php file and add the connection info there. But given that the advanced template assumes multiple deployment stages for your application, it is a very good setup.

Using environment properties with files in elastic beanstalk config files

Working with Elastic Beanstalk .config files is kinda... interesting. I'm trying to use environment properties with the files: configuration option in an Elastic Beanstalk .config file. What I'd like to do is something like:
files:
  "/etc/passwd-s3fs":
    mode: "000640"
    owner: root
    group: root
    content: |
      ${AWS_ACCESS_KEY_ID}:${AWS_SECRET_KEY}
To create an /etc/passwd-s3fs file with content something like:
ABAC73E92DEEWEDS3FG4E:aiDSuhr8eg4fHHGEMes44zdkIJD0wkmd
I.e. use the environment properties defined in the AWS Console (Elastic Beanstalk/Configuration/Software Configuration/Environment Properties) to initialize system configuration files and such.
I've found that it is possible to use environment properties in container_commands, like so:
container_commands:
  000-create-file:
    command: echo ${AWS_ACCESS_KEY_ID}:${AWS_SECRET_KEY} > /etc/passwd-s3fs
However, doing so requires me to manually set the owner, group, file permissions, etc. It's also much more of a hassle when dealing with larger configuration files than the files: configuration option...
Anyone got any tips on this?
How about something like this? I will use the word "context" for dev vs. qa.
Create one file per context:
dev-envvars
export MYAPP_IP_ADDR=111.222.0.1
export MYAPP_BUCKET=dev
qa-envvars
export MYAPP_IP_ADDR=111.222.1.1
export MYAPP_BUCKET=qa
Upload those files to a private S3 folder, S3://myapp/config.
In IAM, add a policy to the aws-elasticbeanstalk-ec2-role role that allows reading S3://myapp/config.
Add the following file to your .ebextensions directory:
envvars.config
files:
  "/opt/myapp_envvars":
    mode: "000644"
    owner: root
    group: root
    # change the source when you need a different context
    #source: https://s3-us-west-2.amazonaws.com/myapp/dev-envvars
    source: https://s3-us-west-2.amazonaws.com/myapp/qa-envvars

Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Access:
          type: S3
          roleName: aws-elasticbeanstalk-ec2-role
          buckets: myapp

commands:
  # commands execute after files per
  # http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
  10-load-env-vars:
    command: . /opt/myapp_envvars
Per the AWS Developer's Guide, commands "run before the application and web server are set up and the application version file is extracted," and before container-commands. I guess the question will be whether that is early enough in the boot process to make the environment variables available when you need them. I actually wound up writing an init.d script to start and stop things in my EC2 instance. I used the technique above to deploy the script.
Credit for the "Resources" section that allows downloading from secured S3 goes to the May 7, 2014 post that Joshua@AWS made to this thread.
I am gravedigging, but since I stumbled across this in the course of my travels: there is a "clever" way to do what you describe, at least in 2018, and at least since 2016. You can retrieve an environment variable by key with get-config:
/opt/elasticbeanstalk/bin/get-config environment --key YOUR_ENV_VAR_KEY
And likewise all environment variables (as JSON by default, or YAML with --output YAML):
/opt/elasticbeanstalk/bin/get-config environment
Example usage in a container command:
container_commands:
  00_store_env_var_in_file_and_chmod:
    command: "/opt/elasticbeanstalk/bin/get-config environment --key YOUR_ENV_KEY | install -D /dev/stdin /etc/somefile && chmod 640 /etc/somefile"
Example usage in a file:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/00_do_stuff.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      YOUR_ENV_VAR=$(source /opt/elasticbeanstalk/bin/get-config environment --key YOUR_ENV_VAR_KEY)
      echo "Hello $YOUR_ENV_VAR"
I was introduced to get-config by Thomas Reggi in https://serverfault.com/a/771067.
I assume that AWS_ACCESS_KEY_ID and AWS_SECRET_KEY are known to you prior to the app deployment.
You can create the file on your workstation and submit it to the Elastic Beanstalk instance along with the code on $ git aws.push:
$ cd .ebextensions
$ echo 'ABAC73E92DEEWEDS3FG4E:aiDSuhr8eg4fHHGEMes44zdkIJD0wkmd' > passwd-s3fs
In .config:
files:
  "/etc/passwd-s3fs":
    mode: "000640"
    owner: root
    group: root

container_commands:
  10-copy-passwords-file:
    command: "cat .ebextensions/passwd-s3fs > /etc/passwd-s3fs"
You might have to play with the permissions or execute cat with sudo. Also, I put the file into .ebextensions as an example; it can be anywhere in your project.
Hope it helps.