In my local OpenShift instance based on CodeReady Containers (CRC) I noticed that deleting a namespace is not a trivial task. Some namespaces remain in the "terminating" state indefinitely.
Using the command line I can try to force deletion of the namespace, but this only causes the oc client to wait indefinitely for the deletion as well, as shown below.
user#localhost:~$ oc delete namespace nodejs-helloworld-staging --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
namespace "nodejs-helloworld-staging" force deleted
Stopping and starting the cluster does not cause OpenShift to proceed with the deletion of the namespace.
What is the correct way to delete a namespace completely? Is this supported by OpenShift, or is this practice recommended against? Are there other approaches to make the namespace available again? Is it, for example, possible / better / recommended to rename the namespace?
Now that I have a stuck "terminating" namespace, is it possible to get rid of it somehow? Is there an even more forceful way to "force" delete my namespace?
As a side question, why is namespace / project deletion such a difficult task for OpenShift to perform? Why is it not trivial? Should it not be a trivial task?
It's not recommended to force the deletion of a namespace (a project on OpenShift).
If you cannot gracefully delete the namespace, you can search for whatever OpenShift is stuck trying to delete in that namespace.
For example, use oc get events to list the OpenShift events in that namespace and then resolve the problems they reveal.
Some resources can block a namespace deletion, a PVC for example, so you need to discover what is causing the problem. Also, prefer to use oc delete project <PROJECT NAME>. You can also add a --wait=true flag.
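As an illustration (not part of the original answer, and with the namespace name used purely as an example), one way to see what is still left in a stuck namespace is to ask every namespaced resource type for its remaining objects:
oc api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 oc get --show-kind --ignore-not-found -n nodejs-helloworld-staging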
In OpenShift, "namespaces" are usually linked to "projects". To delete a namespace, delete the project with the same name.
As for your problem of a namespace staying forever in the "terminating" state, there is a good chance that a dependent object is stuck somewhere while Kubernetes deletes it.
If you are sure of what you are doing, you can edit the "project" manifest and remove the spec.finalizers stanza. That will stop the "cascade delete" action.
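A minimal sketch of that last-resort finalizer removal, assuming the stuck namespace is nodejs-helloworld-staging, that jq is installed, and that your oc version supports replace --raw (be aware this can leave orphaned resources behind):
oc get namespace nodejs-helloworld-staging -o json \
  | jq '.spec.finalizers = []' \
  | oc replace --raw "/api/v1/namespaces/nodejs-helloworld-staging/finalize" -f -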
I just want to create a simple Tekton pipeline on OpenShift (v4) using the Tekton operator.
My pipeline consists of the following operations:
fetch a given git repository,
build a nodejs application with the s2i-nodejs Tekton task,
copy the resulting image from the internal OpenShift registry into an external registry.
Implementing the first two steps was no problem, but the third one is proving incredibly complicated without expert guidance.
Which tool do I need to use to copy my resulting container image (skopeo, crane, etc)?
How do I deal with the credentials (at the CLI, in an authfile, etc)?
Do I need to use a dedicated service account (default pipeline sa is not recommended)?
Is there an example somewhere that might help me?
Which tool do I need
Skopeo would do fine
How do I deal with the credentials
However you want: a Secret, environment variables, generating an auth config, or passing the credentials as arguments to skopeo, ...
Do I need to use a dedicated service account
Probably not. The service account just needs image-puller/image-builder privileges.
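For instance, granting those privileges to a dedicated (or the default pipeline) service account could look like the following sketch, where the project and service account names are placeholders:
oc policy add-role-to-user system:image-puller system:serviceaccount:my-project:pipeline -n my-project
oc policy add-role-to-user system:image-builder system:serviceaccount:my-project:pipeline -n my-project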
Is there an example somewhere that might help me?
Have you looked at the Tekton catalog?
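Putting the skopeo and credentials points together, a hedged sketch of the copy step (image names, the external registry and the credentials are placeholders; inside a Tekton step you would take the token from the mounted service account or a Secret instead of oc whoami):
skopeo copy \
  --src-creds "$(oc whoami):$(oc whoami -t)" \
  --dest-creds "myuser:mypassword" \
  docker://image-registry.openshift-image-registry.svc:5000/my-project/nodejs-helloworld:latest \
  docker://registry.example.com/my-org/nodejs-helloworld:latest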
My site is mostly a static site. It is based on WordPress right now, and I am thinking of using the autoscaling feature.
The problem is that I am not good with languages like Python, Java, etc. for startup scripts.
I am more comfortable with bash scripts.
Is there a way to create a snapshot of a production Compute Engine instance and use it as the template for an instance group, without the complexity of a startup script?
I have two instances: one is an individual instance and one is inside an instance group for autoscaling. Whenever there is an update to my site, I would change it on the individual instance, turn its disk snapshot into the template for the instance group, and everything would be updated.
My question is: is that workable, or do I really have to work on a startup script?
Any friendly advice will be highly appreciated.
Some bash skills should be enough to write a startup script and not use the additional instance and image creation at all. See the documentation for an easy example of that - just put in all the bash commands that you use to prepare that instance yourself. This should be relatively straightforward and makes it easy to modify the process later on.
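As an illustration only (a minimal sketch assuming a Debian-based image; the bucket name and package list are placeholders for whatever your site actually needs), such a startup script could be as simple as:
#! /bin/bash
# Install a web server and PHP for the site
apt-get update
apt-get install -y apache2 php libapache2-mod-php php-mysql
# Pull the prepared site content from a Cloud Storage bucket (placeholder name)
gsutil -m rsync -r gs://my-site-content /var/www/html
systemctl restart apache2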
If you really want to avoid writing the script, what you’ve described should be possible: take an instance that has everything installed as you like it, then delete it while keeping the disk, and create an image out of that disk.
One minor improvement: you can use an instance from the existing instance group by abandoning it.
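For reference, a hedged sketch of the image-based flow (disk, zone and resource names are placeholders; check the flags against your gcloud version):
gcloud compute images create my-site-image --source-disk=my-site-disk --source-disk-zone=us-central1-a
gcloud compute instance-templates create my-site-template --image=my-site-image
gcloud compute instance-groups managed create my-site-group --template=my-site-template --size=2 --zone=us-central1-a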
The Couchbase CLI comes with the cbbackup and cbrestore commands, which I had hoped would allow me to take a database in a known state, back it up, and then restore it somewhere else where only a freshly installed instance exists. Unfortunately it appears that the target database must already have all the right buckets set up and (possibly) that the restore command requires each bucket name to be mentioned explicitly.
This wouldn't pose too much of a problem if I were hand-holding the process, but the goal is to start a new environment in a fully automated fashion, and I'm wondering if someone has a working method of achieving this.
If it were me, I'd use the CLI, REST API, or one of the Couchbase SDKs to automate the creation of the target bucket and then do the restore.
REST API:
http://docs.couchbase.com/couchbase-manual-2.5/cb-rest-api/#creating-and-editing-buckets
CLI:
http://docs.couchbase.com/couchbase-manual-2.5/cb-cli/#couchbase-cli-commands
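For illustration, a hedged sketch of the CLI route (host, credentials, bucket name and RAM quota are placeholders; check the flags against your Couchbase version):
couchbase-cli bucket-create -c target-host:8091 -u Administrator -p password --bucket=mybucket --bucket-type=couchbase --bucket-ramsize=512
cbrestore /backups/mybackup http://target-host:8091 -u Administrator -p password -b mybucket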
Another option you might look into is to use these same kinds of methods to automate set up of uni-directional XDCR from the source to the target cluster.
How do I see my Liferay tables in the MySQL database?
I have created portal-ext.properties in the Liferay home directory, but I can't see my Liferay tables in MySQL.
The table is created in docroot/WEB-INF/sql in the Eclipse IDE.
Please help me figure out where I went wrong and what is missing.
#
# MySQL
#
include-and-override=portal-ext.properties
include-and-override=${liferay.home}/portal-ext.properties
jdbc.default.driverClassName=com.mysql.jdbc.Driver
jdbc.default.url=jdbc:mysql://localhost/kportal?useUnicode=true&characterEncoding=UTF-8&useFastDateParsing=false
jdbc.default.username=root
jdbc.default.password=ubuntu123
schema.run.enabled=true
schema.run.minimal=true
Tables created through service-builder will only be created once you deploy your plugin to the actual server (and run the server), not at build time. Also, your plugin needs to deploy correctly - if initialization fails (e.g. due to missing pieces or dependencies that are not met), the tables will not be created.
Also, the tables by default will be named with the namespace you gave as a prefix. So if you declare a namespace X (in service.xml) and an entity named Y, the table to look for will be named X_Y.
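As a small illustration (a hedged sketch with made-up names, not taken from your project), a service.xml like the following would produce a table named KPortal_Book:
<service-builder package-path="com.example.kportal">
    <namespace>KPortal</namespace>
    <entity name="Book" local-service="true">
        <column name="bookId" type="long" primary="true" />
        <column name="title" type="String" />
    </entity>
</service-builder>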
Also, remember you'll have to run ant build-services after you have edited your service.xml. Then deploy the plugin and wait for a log message similar to "...your plugin... is available for use" to be sure it deployed correctly. (Edit: This is no longer printed for portlets, only for the other plugin types, so you might not see it when you deploy your plugin.)
If this doesn't help, please give more information; currently you don't give any details about what you've actually tried. You'll find more steps and details in the development guide.
Also, make sure
that the account you use for the database has CREATE TABLE permissions (you use root in your configuration above - that should do the trick)
that you're checking the correct database in case you have multiple
that Liferay actually picks up your configuration file. The startup log will tell you which portal-ext.properties files are read, as well as which database it will use. In case you can't find the name/location of your portal-ext.properties file, make sure that you indeed have a file with this name. A common problem on Windows is that people create portal-ext.properties.txt (and Windows hides the .txt part of the name).
I would like a way for individual users to send a repo path to a Hudson server and have the server start a build of that repo. I don't want to leave behind a trail of dynamically created job configurations. I'd like to start multiple simultaneous instances of the same job. Obviously this requires that the workspaces be different for the different instances. I believe this isn't possible using any of the current extensions. I'm open to different approaches to what I'm trying to accomplish.
I just want the hudson server to be able to receive requests for builds from outside sources, and start them as long as there are free executors. I want the build configuration to be the same for all the builds except the location of the repo. I don't want to have dozens of identical jobs sitting around with automatically generated names.
Is there anyone out there using Hudson or Jenkins for something like this? How do you set it up? I guess with enough scripting I could dynamically create the necessary job configuration through the CLI API from a script, and then destroy it when it's done. But I want to keep the artifacts around, so destroying the job when it's done running is an issue. I really don't want to write and maintain my own extension.
This should be pretty straightforward to do with Jenkins without requiring any plugins, though it depends on the type of SCM that you use.
It's worth upgrading from Hudson in any case; there have certainly been improvements to the features required to support your use case in the many releases since the project became Jenkins.
You want to pass the repo path as a parameter to your build, so you should select the "This build is parameterized" option in the build config. There you can add a string parameter called REPO_PATH or similar.
Next, where you specify where code is checked-out from, replace the path with ${REPO_PATH}.
If you are checking out the code — or otherwise need access to the repo path — from a script, the variable will automatically be added to your environment, so you can refer to ${REPO_PATH} from your shell script or Ant file.
At this point, when pressing Build Now, you will be prompted to enter a repo path before the build will start. As mentioned in the wiki page above, you can call the buildWithParameters URL to start a build directly with the desired parameter, e.g. http://server/job/myjob/buildWithParameters?REPO_PATH=foo
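For example, a remote trigger could look like the following sketch, where the server URL, job name, repo path and credentials are placeholders and the call assumes your Jenkins uses API token authentication:
curl -X POST --user myuser:myapitoken "http://server/job/myjob/buildWithParameters?REPO_PATH=https://example.com/team/repo.git"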
Finally, if you want builds to execute concurrently, Jenkins can manage this for you by creating temporary workspaces for concurrent builds. Just enable the "Execute concurrent builds if necessary" option in your job config.
The artifacts will be available, the same as for any other Jenkins build. Though you probably want to manage how many recent artifacts are kept; this can be done by checking "Discard Old Builds", and then under Advanced…, you can enter a value for "Max # of builds to keep with artifacts".