I'm building an Ubuntu Live ISO with all OpenStack services installed and configured for a single-node setup (all services on the same node).
To build this ISO I'm creating a chroot environment by unpacking the Ubuntu Live ISO.
While installing OpenStack with Puppet, Puppet is unable to start services in the chroot environment.
For example:
/etc/init.d/mysql status # gives this output:
Rather than invoking init scripts through /etc/init.d, use the service(8)
utility, e.g. service mysql status
Since the script you are attempting to invoke has been converted to an
Upstart job, you may also use the status(8) utility, e.g. status mysql
while service mysql status produces no output.
Any pointers will be appreciated :)
I have found that Puppet is indeed awkward in chroot jails because Facter's fact gathering does not work properly (or something in that vein).
To the problem at hand, though: you may want to make your service resources use an appropriate provider if Puppet doesn't select the right one, e.g.
service { "mysql": provider => upstart }
or even
Service { provider => upstart } # resource default at global scope
See the list of service providers. Note that Puppet is supposed to select the appropriate one on its own, so this may indeed be a problem with the chroot and how it skews Facter's deliberations.
I have a web app (an online calculator, for example) developed by my fellow team members. Now they want to deploy it in PCF using manifests.
Languages used: Python, PHP, and JavaScript.
I have gone through the PCF docs about manifest.yml.
In those I don't have a clear idea of the services and env sections.
What are these services, how can I find the services for the above project, and how can I find the environment variables?
Also, are these fields mandatory to run the project in PCF?
To your original question:
What are these services, how can I find the services for the above project, and how can I find the environment variables? Also, are these fields mandatory to run the project in PCF?
Does your app require any services to run? Services would be things like a database or message queue. If it does not, then you do not need to specify any services in your manifest. They are optional.
Similarly, for environment variables, you would only need to set them if they are required to configure your application. Otherwise, just omit that section of your manifest.
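For illustration, a minimal manifest.yml using both optional sections might look like the sketch below; the app name, service name, and variable are hypothetical placeholders, not values from your project:

```yaml
applications:
- name: calculator
  memory: 256M
  services:
  - my-postgres          # a service instance the app is bound to (optional)
  env:
    APP_ENV: production  # only needed if the app reads this variable (optional)
```

If the app needs neither, the manifest can be as short as the applications list with a name.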
At the end of the day, you should talk with whomever developed the application or read the documentation they produce as that's the only way to know what services or environment variables are required.
In regards to your additional questions:
1) I also have one more query: in our application we use Python, with lots of packages, say pandas, numpy, scipy, and so on. How can I import all these libraries into PCF? Buildpacks only contain the language runtime, right?
Correct. The buildpack only includes Python itself. Your dependencies either need to be installed or vendored. To do this for Python, you need to include a requirements.txt file. The buildpack will see this and use pip to install your dependencies.
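For example, a requirements.txt at the root of the app might list the packages mentioned (the version pins here are illustrative):

```
pandas==1.3.5
numpy==1.21.6
scipy==1.7.3
```

The buildpack runs pip against this file during staging.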
See the docs for the Python buildpack which explains this in more detail: https://docs.cloudfoundry.org/buildpacks/python/index.html#pushing_apps
2) Also, what should the push path be for my app? For Java I could push a JAR file.
For Java apps, you need to push compiled code. That means you need to run something like mvn package or gradle assemble to build your executable JAR or WAR file. This should be a self-contained file that has everything necessary to run your app: compiled class files, config, and all dependent JARs.
You then run cf push -p path/to/my-app.jar (or WAR, whatever you build). The cf CLI will take everything in the archive and push it up to Cloud Foundry, where the Java buildpack will install things like the JVM and possibly Tomcat so your app can run.
What should I do for an application developed using Python, JavaScript, and PHP?
You can use multiple buildpacks. See the instructions here.
https://docs.cloudfoundry.org/buildpacks/use-multiple-buildpacks.html
In short, you can have as many buildpacks as you want. The last buildpack in the list is special because that is the buildpack which will set the start command for your application (although you can override this with cf push -c if necessary). The non-final buildpacks will run and simply install dependencies.
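As a sketch, specifying several buildpacks in manifest.yml might look like this; the buildpack names assume the standard system buildpacks available in your foundation:

```yaml
applications:
- name: calculator
  buildpacks:
  - python_buildpack
  - nodejs_buildpack
  - php_buildpack   # final buildpack: determines the start command
```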
3) We were using PostgreSQL. How can I use it in PCF with my app?
Run cf marketplace and see if there are any Postgres providers in your marketplace. If there is one, you can just do a cf create-service <provider> <plan> <service name> and the foundation will create a database for you to use. You would then run cf bind-service <app> <service name> to bind the service you created to your app. This will generate credentials and pass them along to your app when it starts. Your app can then read the credentials out of VCAP_SERVICES and use them to make connections to the database.
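To illustrate the last step, here is a hedged sketch of reading a connection URI out of VCAP_SERVICES; the JSON below simulates what a hypothetical Postgres broker might inject, since the exact shape depends on the provider:

```shell
# Simulated VCAP_SERVICES as a bound Postgres service might provide it
# (the service label "elephantsql" and the URI are made-up placeholders).
export VCAP_SERVICES='{"elephantsql":[{"credentials":{"uri":"postgres://user:pass@host:5432/db"}}]}'

# Pull the first service's credentials.uri out of the JSON
uri=$(echo "$VCAP_SERVICES" | python3 -c 'import json,sys; print(list(json.load(sys.stdin).values())[0][0]["credentials"]["uri"])')
echo "$uri"
```

In a real app you would do this parsing in your application's language at startup.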
See here for more details:
https://docs.cloudfoundry.org/devguide/services/application-binding.html
https://docs.cloudfoundry.org/devguide/deploy-apps/environment-variable.html#VCAP-SERVICES
I've found version 0.6.0 of the Operator Framework's Operator Lifecycle Manager (OLM) to be lacking and see that 0.12.0 is available with lots of new updates. How can I upgrade to that version?
Also, what do I need to consider regarding this upgrade? Will I lose any integrations, etc.?
One thing to consider is that in OpenShift 3, OLM runs from a namespace called operator-lifecycle-manager. In later versions that becomes simply olm. Some things to consider:
Do you have operators running right now, and if you make this change, will your catalog names change? This will need to be reflected in your subscriptions.
Do you want to change any of the default install configuration?
Look into values.yaml to configure your OLM.
Look into the YAML files in step 2 and adjust if needed.
1) First, turn off OLM 0.6.0 or whatever version you might have.
You can delete that namespace, or, as I did, stop the deployments within it and scale the ReplicaSets down to 0 pods, which effectively turns OLM 0.6.0 off.
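As a sketch, scaling everything in the old namespace down might look like this; the namespace name is an assumption based on the OpenShift 3 default mentioned above, and the command needs a live cluster to run against:

```shell
# Scale every deployment in the old OLM namespace to 0 replicas,
# effectively turning OLM 0.6.0 off without deleting anything.
oc scale deployment --all --replicas=0 -n operator-lifecycle-manager
```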
2) Install OLM 0.12.0
oc create -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/0.12.0/crds.yaml
oc create -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/0.12.0/olm.yaml
alt 2) If you'd rather just install the latest from the repo's master branch:
oc create -f https://raw.githubusercontent.com/operator-framework/operator-lifecycle-manager/master/deploy/upstream/quickstart/crds.yaml
oc create -f https://raw.githubusercontent.com/operator-framework/operator-lifecycle-manager/master/deploy/upstream/quickstart/olm.yaml
So now you have OLM 0.12.0 installed. You should be able to see in the logs that it picks up where 0.6.0 left off. You'll need to start learning about OperatorGroups, though, as that's new and will start affecting how you operate your operators pretty quickly. The Cluster Console's ability to show your catalogs does seem to be lost, but you can still view that information on the command line with oc get packagemanifests.
Just to give context:
I am planning to use Terraform to bring up new, separate environments with EC2 machines, ELB, etc., and then maintain their configuration as well.
Doing that with Terraform and the AWS provider sounds fairly simple.
Problem 1:
While launching those instances I want to install a few packages, etc., so that when Terraform launches the instances (servers), the apps are up and running.
Assuming the above is up and running:
Problem 2:
How do I deploy new code on the servers in this environment launched by Terraform?
Should I use, e.g., Ansible playbooks/Chef recipes/Puppet manifests for that, or does Terraform offer other options?
Brief answers:
Problem 1: While launching those instances I want to install a few packages, etc., so that when Terraform launches the instances (servers), the apps are up and running.
A couple of options:
Create an AMI of your instance with the packages pre-installed and specify it in the resource.
Use a user data script to install the packages you need when the instance starts.
Use Ansible playbooks/Chef recipes/Puppet to install packages once the instance is running (e.g. creating an OpsWorks stack with Terraform).
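To make the user data option concrete, here is a minimal sketch; the package (nginx) is a placeholder, and the resulting file would be referenced from an aws_instance resource via user_data = file("user_data.sh"):

```shell
# Write a hypothetical user data script that installs and starts a package
# at first boot (cloud-init runs it as root on Ubuntu AMIs).
cat > user_data.sh <<'EOF'
#!/bin/bash
apt-get update -y
apt-get install -y nginx
systemctl enable --now nginx
EOF
chmod +x user_data.sh
```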
Problem 2: How do I deploy new code on the servers in this environment launched by Terraform? Should I use, e.g., Ansible playbooks/Chef recipes/Puppet manifests for that, or does Terraform offer other options?
This is not the intended use case for Terraform; use other tools like Jenkins, or AWS services like CodePipeline or CodeDeploy. Ansible/Chef/Puppet can also help (e.g. with OpsWorks).
On my Ubuntu laptop I was issuing some kubectl commands, including running Kubernetes from a local Docker container, and all was well. At some point I then issued this command:
kubectl config set-cluster test-doc --server=https://104.196.108.118
Now my local kubectl fails to execute; it looks like the server setting needs to be reset back to the default:
kubectl version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.2", GitCommit:"528f879e7d3790ea4287687ef0ab3f2a01cc2718", GitTreeState:"clean"}
error: couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" }
I deleted and reinstalled the gcloud SDK binaries and ran
mv ~/.config/gcloud ~/.config/gcloud~ignore
gcloud init
gcloud components update kubectl
How do I delete my local kubectl settings (on Ubuntu 16.04) and start fresh?
It's important to note that you've set a kubeconfig setting for your client. When you run kubectl version, you're getting the version of both the client and the server, which in your case is why the version command fails.
Updating your config
You need to update the setting to the appropriate information. You can use the same command you used to set the server to change it to the correct server.
If you want to wipe the slate clean in terms of client config, you should remove the kubeconfig file(s). In my experience with the gcloud setup, this is just ~/.kube/config.
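If you only want to remove the one bad entry rather than the whole file, kubectl can edit the kubeconfig for you; test-doc is the cluster name from the question, and this needs kubectl on your PATH:

```shell
# Delete just the misconfigured cluster entry from the kubeconfig
kubectl config delete-cluster test-doc
```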
If you are running the cluster through google cloud engine, you can use gcloud to get the kubeconfig set for you as per the container engine quick start guide. The following assumes that you have defaults for the project, zone, and cluster set.
gcloud container clusters get-credentials CLUSTER_NAME
Removing kubectl - this isn't necessary
If your goal is to get rid of kubectl wholesale, you should remove the component rather than resetting gcloud.
gcloud components remove kubectl
But that won't solve your problem, as it doesn't remove or reset ~/.kube/config (at least when I run it on a Mac), and if you want to keep working with kubectl, you'll need to reinstall it.
I am migrating from DotCloud to Elastic Beanstalk.
Using DotCloud, they clearly explained how to set up a Python worker and how to use supervisord.
Moving to Elastic Beanstalk, I am lost on how I could do that.
I have a script myworker.py and want to make sure it is always running. How?
Elastic Beanstalk is just a stack configuration tool over EC2, ELB, and Auto Scaling.
One approach is to create your own AMI, but since October last year there is another approach that will probably be more suitable for your needs: ebextensions.
.ebextensions is just a directory in your application that gets detected once your application has been loaded by AWS.
Here is the full documentation: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers.html
With Amazon Linux 2 you need to use the .platform folder to supply Elastic Beanstalk with installation scripts.
We recommend using platform hooks to run custom code on your environment instances. You can still use commands and container commands in .ebextensions configuration files, but they aren't as easy to work with. For example, writing command scripts inside a YAML file can be cumbersome and difficult to test.
So you should add a prebuild hook (example) into a .platform folder to install supervisor and a postdeploy hook (example) to restart supervisor after each deployment.
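As a sketch, a prebuild hook that installs supervisor might look like the following; the file name, and installing via pip on Amazon Linux 2's yum-based image, are assumptions:

```shell
# Create a hypothetical prebuild hook that installs supervisor
mkdir -p .platform/hooks/prebuild
cat > .platform/hooks/prebuild/01_install_supervisor.sh <<'EOF'
#!/bin/bash
# Runs on the instance before the app is set up (Amazon Linux 2)
yum install -y python3-pip
pip3 install supervisor
EOF
chmod +x .platform/hooks/prebuild/01_install_supervisor.sh
```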
There is an ini file (example) used in the script; it is Laravel-specific.
Make sure that the .sh files from the .platform folder are executable before deploying your project:
$ chmod +x .platform/hooks/prebuild/*.sh
$ chmod +x .platform/hooks/postdeploy/*.sh