I am new to Apache Drill. I added the code below to drill-override.conf:
drill.exec {
  security.user.auth {
    enabled: true,
    packages += "org.apache.drill.exec.rpc.user.security",
    impl: "pam",
    pam_profiles: [ "sudo", "login" ]
  }
}
But it gives an error when I log in through the Web UI saying:
username and password invalid
How can I get Drill to accept my root user?
Don't forget to add the PAM library (libjpam.so) to some directory, say <jpamdir>.
Edit <drill_home>/conf/drill-env.sh and add:
export DRILL_JAVA_LIB_PATH="-Djava.library.path=<jpamdir>"
export DRILLBIT_JAVA_OPTS="-Djava.library.path=<jpamdir>"
export DRILL_SHELL_JAVA_OPTS="-Djava.library.path=<jpamdir>"
Then make these changes to <drill_home>/conf/drill-override.conf:
drill.exec: {
  cluster-id: "",
  zk.connect: "",
  impersonation: {
    enabled: true,
    max_chained_user_hops: 3
  },
  security: {
    auth.mechanisms : ["PLAIN"],
  },
  security.user.auth: {
    enabled: true,
    packages += "org.apache.drill.exec.rpc.user.security",
    impl: "pam",
    pam_profiles: [ "sudo", "login" ]
  }
}
Run Drill as root:
sudo <drill_home>/bin/sqlline -u jdbc:drill:zk=local -n <user> -p <password>
Then log in to the Drill Web UI/client with a Linux user.
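A minimal sketch of the library setup, assuming <jpamdir> is the same directory referenced in drill-env.sh above and that the drillbit runs on this node:
# place the JPam native library where the JVM can load it (path is illustrative)
mkdir -p <jpamdir>
cp libjpam.so <jpamdir>/
# the profiles listed in pam_profiles must exist on this machine
ls /etc/pam.d/sudo /etc/pam.d/login
# restart the drillbit so the new auth settings take effect
<drill_home>/bin/drillbit.sh restart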
I am testing a mysql_database resource inside a docker_container.mysql resource on Windows. After executing terraform destroy, the previously created mysql_database resource is not destroyed (Terraform still shows it in terraform.tfstate), probably because I did not tell Terraform in main.tf that the mysql_database depends_on the docker_container.mysql resource.
I have already added the depends_on relationship, but terraform destroy still does not work and keeps saying:
Error: Could not connect to server: dial tcp 127.0.0.1:3306: connectex: No connection could be made because the target machine actively refused it.
Here is main.tf:
provider "docker" {
  host = "npipe:////.//pipe//docker_engine"
}

resource "docker_image" "mysql" {
  name = "mysql:8"
  //keep_locally = true
}

resource "docker_container" "mysql" {
  name    = "mysql"
  image   = docker_image.mysql.latest
  restart = "always"
  env = [
    "MYSQL_ROOT_PASSWORD=root"
  ]
  volumes {
    volume_name    = "mysql-vol"
    container_path = "/var/lib/mysql"
  }
  ports {
    internal = 3306
    external = 3306
  }
}

provider "mysql" {
  endpoint = "127.0.0.1:3306"
  username = "root"
  password = "root"
}

resource "mysql_database" "test" {
  name       = "test"
  depends_on = [docker_container.mysql]
}
And here is terraform.tfstate:
{
  "version": 4,
  "terraform_version": "0.12.29",
  "serial": 37,
  "lineage": "5f98996b-87be-947a-39fa-7cdc060fac79",
  "outputs": {},
  "resources": [
    {
      "mode": "managed",
      "type": "mysql_database",
      "name": "test",
      "provider": "provider.mysql",
      "instances": [
        {
          "schema_version": 0,
          "attributes": {
            "default_character_set": "utf8",
            "default_collation": "utf8_general_ci",
            "id": "test",
            "name": "test"
          }
        }
      ]
    }
  ]
}
What is the proper/safer action in cases like this one? Should I just manually remove the mysql_database resource from terraform.tfstate?
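For reference, if you do decide to remove it by hand, the state CLI is the usual route rather than editing terraform.tfstate directly; a sketch using the resource address from the config above:
# drop the resource from state only; whatever exists in MySQL (if anything) is left untouched
terraform state rm mysql_database.test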
I am trying to use pm2 deploy, but I can't find how to integrate the user password in the config file (https://pm2.keymetrics.io/docs/usage/deployment/):
{
  "apps": [{
    "name": "HTTP-API",
    "script": "http.js"
  }],
  "deploy": {
    // "production" is the environment name
    "production": {
      "user": "ubuntu",
      "host": ["xxxxxxxx"],
      "ref": "origin/master",
      "repo": "git@github.com:Username/repository.git",
      "path": "/var/www/my-repository",
      "post-deploy": "npm install; grunt dist"
    }
  }
}
I'm not able to run npm install on my server without sudo (according to this), so how can I pass the password inside this config?
================= SOLUTIONS ===================
The only solution I found is to pass the password directly on the command line and read it with sudo -S:
production: {
  key: "/home/xxxx/.ssh/xxx.pem",
  user: "xxx",
  host: ["xxxxxxxx"],
  ssh_options: "StrictHostKeyChecking=no",
  ref: "origin/xxxx",
  repo: "xxxxx@bitbucket.org:xxxx/xxxxx.git",
  path: "/home/xxxx/xxxxxx",
  'pre-setup': "echo '## Pre-Setup'; echo 'MYPASS' | sudo -S bash setup.sh;",
  'post-setup': "echo '## Post-Setup'",
  'post-deploy': "echo 'MYPASS' | sudo -S bash start.sh;",
}
As I understand it, there is no option for an SSH password in pm2 deploy, only keys.
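For context, the hooks above rely on sudo's -S flag, which reads the password from standard input instead of prompting on a TTY; each hook effectively runs something like this on the remote host:
# -S: read the sudo password from stdin, so no interactive prompt is needed
echo 'MYPASS' | sudo -S bash setup.sh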
In OpenShift 4.3, I'm trying to set an env key from a parameter value within a template. For example:
"env": [
  {
    "name": "${FOO}-TEST",
    "value": "${BAR}"
  },
  {
    "name": "TEST",
    "value": "${BAR}"
  }
]

"parameters": [
  {
    "name": "FOO",
    "required": true
  },
  {
    "name": "BAR",
    "required": true
  }
]
Then I run oc new-app with -p FOO=X -p BAR=Y, and when I check the env vars on the pod, it shows:
TEST=Y
But it does not show:
X-TEST=Y
In a template, can I not use a parameter value as an env key?
I think you can use a parameter value as an env key.
Could you check that the template works as you expect, as follows?
Export the template as a YAML file first.
$ oc get template <your template name> -o yaml > test-template.yml
Check from the output whether the parameters you specified are substituted as expected.
$ oc process -f test-template.yml -p FOO=X -p BAR=Y
Here is my simple test result, e.g.:
$ cat test-temp.yml
:
  containers:
  - env:
    - name: "${NAME}-KEY"
      value: ${NAME}
:
$ oc process -f test-temp.yml -p NAME=test
:
  "containers": [
    {
      "env": [
        {
          "name": "test-KEY",
          "value": "test"
        }
      ],
:
I hope it helps you.
Export your variables, then pass them to oc process as parameters:
oc process -f yamlFile -p FOO="${FOO}" -p BAR="${BAR}"
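A hedged end-to-end sketch, reusing the exported template file and parameter names from above:
# process the template with its parameters and create the resulting objects in one go
oc process -f test-template.yml -p FOO=X -p BAR=Y | oc create -f -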
I am using PM2 for deployment / process management. The application handles lots of DNS tasks, so it's easiest to run the development app from the remote server and either rsync or SFTP on save (still sorting this out).
This being the case, it is the ideal setup for the dev app to be on the same VM as the production app. However, the structure of the PM2 deployment configuration file (ecosystem.config.js) doesn't seem to make this possible: when I run pm2 deploy development, the development version overtakes the production process on the VM.
Here is what I have:
module.exports = {
  apps: [
    {
      name: "APP NAME",
      script: "app.js",
      env_development: {
        NODE_ENV: "development",
        ...
      },
      env_production: {
        NODE_ENV: "production",
        ...
      }
    }
  ],
  deploy: {
    production: {
      user: "user",
      host: ["123.123.123.123"],
      ref: "origin/master",
      repo: "git@gitlab.com:me/repo.git",
      path: "/var/www/app",
      "post-deploy":
        "npm install && pm2 reload ecosystem.config.js --env production"
    },
    development: {
      user: "user",
      host: ["123.123.123.123"],
      ref: "origin/master",
      repo: "git@gitlab.com:me/repo.git",
      path: "/var/www/app-dev",
      "post-deploy":
        "npm install && pm2 reload ecosystem.config.js --env development"
    }
  }
};
Any thoughts for the best way to go about accomplishing this?
After referencing this PR, I'm thinking you should be able to add append_env_to_name: true as a property of the object in the apps array of ecosystem.config.js.
So your updated ecosystem.config.js file would be as follows:
module.exports = {
  apps: [
    {
      name: "APP NAME",
      append_env_to_name: true, // <===== add this line
      script: "app.js",
      env_development: {
        NODE_ENV: "development",
        ...
      },
      env_production: {
        NODE_ENV: "production",
        ...
      }
    }
  ],
  deploy: {
    production: {
      user: "user",
      host: ["123.123.123.123"],
      ref: "origin/master",
      repo: "git@gitlab.com:me/repo.git",
      path: "/var/www/app",
      "post-deploy":
        "npm install && pm2 reload ecosystem.config.js --env production"
    },
    development: {
      user: "user",
      host: ["123.123.123.123"],
      ref: "origin/master",
      repo: "git@gitlab.com:me/repo.git",
      path: "/var/www/app-dev",
      "post-deploy":
        "npm install && pm2 reload ecosystem.config.js --env development"
    }
  }
};
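Then deploy each environment separately; with append_env_to_name set, the two deployments should register under distinct process names (e.g. "APP NAME-production" and "APP NAME-development"), so one no longer replaces the other. A sketch of the commands:
# one-time setup per environment, then deploy
pm2 deploy ecosystem.config.js production setup
pm2 deploy ecosystem.config.js production
pm2 deploy ecosystem.config.js development setup
pm2 deploy ecosystem.config.js development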
I created an AWS CloudFormation template which creates a launch configuration and an auto scaling group. In the user data of the launch config I configured the file system mount target and installed the CloudWatch agent:
Code EDITED
"LaunchConfig":{
"Type":"AWS::AutoScaling::LaunchConfiguration",
"Metadata" : {
"AWS::CloudFormation::Init" : {
"config" : {
"files" : {
"/etc/cwlogs.cfg": {
"content": { "Fn::Join" : ["", [
"[general]",
"state_file = /var/awslogs/state/agent-state",
"[/var/log/syslog]",
"file = /tmp/",
"log_group_name = ecs-dataloader",
"log_stream_name = ECS-loader",
"datetime_format = %b %d %H:%M:%S"
]]},
"mode": "000755",
"owner": "root",
"group": "root"
},
"/etc/ecs/ecs.config": {
"content": { "Fn::Join" : ["", [
"ECS_CLUSTER=", { "Ref" : "ClusterName" }
]]},
"mode": "000755",
"owner": "root",
"group": "root"
}
},
"commands": {
"Update": {
"command": "yum -y update"
},
"InstallNfs":{
"command": "yum -y install nfs-utils"
},
"CreatFolder": {
"command": "mkdir -p /efs-mount-point/"
},
"EditPerms": {
"command": "chown ec2-user:ec2-user /efs-mount-point/"
},
"MountPoint": {
"command": { "Fn::Join" : ["", [
"AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)\n",
"echo LABEL=cloudimg-rootfs / ext4 defaults,discard 0 0\n",
"$AZ.",{ "Fn::FindInMap" : [ "FileSystemMap", {"Ref" : "EnvParam"}, "FileSystemID"] },
".efs.",{ "Ref" : "AWS::Region" },".amazonaws.com:/ /efs-script-import-tmp nfs4 nfsvers=4.1 0 0 >> /etc/fstab"
]]}
},
"Mount": {
"command": "mount -a -t nfs4"
},
"CloudWatchAgent": {
"command": { "Fn::Join" : ["", [
"curl https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py -O\n",
"python ./awslogs-agent-setup.py --region ",{"Ref" : "AWS::Region"},"\n",
"chmod +x ./awslogs-agent-setup.py ./awslogs-agent-setup.py -n -r",
{"Ref" : "AWS::Region"}," -c /etc/cwlogs.cfg"
]]}
}
},
"services" : {
"sysvinit" : {
"awslogs" : { "enabled" : "true", "ensureRunning" : "true" }
}
}
}
}
},
"Properties":{
"ImageId":{ "Fn::FindInMap":[ "AWSRegionToAMI", { "Ref":"AWS::Region" }, "AMIID" ] },
"SecurityGroups":[ { "Ref":"EcsSecurityGroup" } ],
"InstanceType": {"Ref":"InstanceType" },
"IamInstanceProfile":{ "Ref":"EC2InstanceProfile" },
"KeyName":{ "Fn::FindInMap" : [ "KeyPairMapping", {"Ref" : "EnvParam"}, "Key"] },
"UserData":{ "Fn::Base64" : {
"Fn::Join" : ["", [
"#!/bin/bash -xe\n",
"/opt/aws/bin/cfn-init -v ",
" --stack ", { "Ref": "AWS::StackName" },
" --resource LaunchConfig",
" --region ", { "Ref" : "AWS::Region" },"\n"
]]}
}
}
}
The image details : "eu-west-1": { "AMIID":"ami-ba346ec9" },
After running the template, the resources got created successfully, so I connected via SSH to the instance created by the auto scaling group to see if the user data had run properly.
Unfortunately, after checking, this is what I found in the /etc/fstab file:
$ cat /etc/fstab
LABEL=cloudimg-rootfs / ext4 defaults,discard 0 0
$ cat /etc/ecs/ecs.config
cat: /etc/ecs/ecs.config: No such file or directory
The instance is not connected to the file system, and the file I tried to create in CloudFormation::Init, /etc/cwlogs.cfg (the CloudWatch agent config file), does not exist either. Can anyone tell me what is wrong with the user data, given that it didn't get executed?
I tried to check the log files, but:
$ cat /var/log/cfn-init.log
cat: /var/log/cfn-init.log: No such file or directory
What is the problem here?
EDIT
$ cat /var/log/cloud-init-output.log
...
Cloud-init v. 0.7.6 running 'modules:final' at Fri, 17 Feb 2017 11:43:42 +0000. Up 44.66 seconds.
+ yum install -y aws-cfn-bootstrap/opt/aws/bin/cfn-init -v --stack Mystack --resource LaunchConfig --region eu-west-1
Loading "priorities" plugin
Loading "update-motd" plugin
Config time: 0.009
Command line error: no such option: --stack
Feb 17 11:43:43 cloud-init[2814]: util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-001 [1]
Feb 17 11:43:43 cloud-init[2814]: cc_scripts_user.py[WARNING]: Failed to run module scripts-user (scripts in /var/lib/cloud/instance/scripts)
Feb 17 11:43:43 cloud-init[2814]: util.py[WARNING]: Running module scripts-user (<module 'cloudinit.config.cc_scripts_user' from '/usr/lib/python2.7/dist-packages/cloudinit/config/cc_scripts_user.pyc'>) failed
Cloud-init v. 0.7.6 finished at Fri, 17 Feb 2017 11:43:43 +0000. Datasource DataSourceEc2. Up 45.18 seconds
User Data log files are located at:
Linux cloud-init: /var/log/cloud-init.log
Windows EC2Config: C:\cfn\log\cloud-init.log
Check to see whether anything is in the log file. If not, then something's wrong with passing the User Data script from the template. (Why do you have the initial empty quotes in the Join?)
cfn-init is only installed by default on Amazon Linux AMI, so if you're using any other Image ID to launch your EC2 instance you need to ensure that it's installed correctly before invoking it. See my previous answer to the question, "Installing packages using apt-get in CloudFormation file" for more info.
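For example, on an Ubuntu AMI a hedged sketch of that install step, placed at the top of the user data script before calling cfn-init (package names and the bootstrap tarball URL are the commonly used ones, so treat them as assumptions to verify):
# assumption: an Ubuntu image without the CloudFormation helper scripts preinstalled
apt-get update
apt-get -y install python-pip
pip install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz
# cfn-init should now be available (typically under /usr/local/bin); placeholders are illustrative
cfn-init -v --stack <stack-name> --resource LaunchConfig --region <region>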
Here is how I resolved the problem: I install the CloudFormation helper scripts (aws-cfn-bootstrap) in the user data before calling cfn-init, and instead of installing the CloudWatch agent in the metadata, I do it in the user data.
"UserData":{ "Fn::Base64" : {
  "Fn::Join" : ["", [
    "#!/bin/bash -xe\n",
    "yum -y install aws-cfn-bootstrap\n",
    "/opt/aws/bin/cfn-init -v",
    " --stack ", { "Ref": "AWS::StackName" },
    " --resource LaunchConfig",
    " --region ", { "Ref" : "AWS::Region" },"\n",
    "# Get the CloudWatch Logs agent\n",
    "wget https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py\n",
    "# Install the CloudWatch Logs agent\n",
    "python ./awslogs-agent-setup.py -n -r ", { "Ref" : "AWS::Region" }, " -c /etc/cwlogs.cfg || error_exit 'Failed to run CloudWatch Logs agent setup'\n",
    "service awslogs start"
  ]]}