I was checking the FIWARE Sanity Checks status portal and I was wondering if it is possible to execute a single test on a FIWARE Lab node.
The feature available on the FIWARE Sanity Checks Dashboard that lets node administrators re-run Sanity Checks always runs all tests on the given node, but you can execute individual tests yourself in your own local environment.
The provided sanity_test script lets you run all designed Sanity Checks on a given node or a set of nodes. This script runs all tests using the nosetests tool and, so far, you cannot choose which tests to execute through parameters of this script; that is not implemented in the current version of the component.
If you want to run a specific test, you will have to execute the tool manually (after setting the OpenStack credential variables for your node in your environment and configuring all the needed Sanity Check properties). For instance:
$ nosetests tests.regions.test_spain2:TestSuite.test_deploy_instance_with_new_network_and_e2e_connection_using_public_ip
The command above will execute only the test test_deploy_instance_with_new_network_and_e2e_connection_using_public_ip on Spain2 and will print to the console any error or exception that is raised. You can use all the options provided by nosetests to run a custom set of sanity tests with your own configuration (output formats and reports, logging, etc.).
How do I wait, using the oc command, for an operator package manifest to be available?
I am trying this
❯ oc wait --for=condition=ready packagemanifest/example-manifest -n openshift-marketplace
Error from server (MethodNotAllowed): the server does not allow this method on the requested resource
This is failing because the package manifest resource does not expose a ready condition.
This error may occur when the server does not allow the requested method on a particular resource; here, that is likely the watch semantics that oc wait relies on.
Check the following possible causes:
1) kubectl/oc resolution is driven by API discovery. Check whether you have created two resources with conflicting names, one of which is not listable.
2) Check whether your Application Default Credentials are configured for a different user than your regular credentials.
3) Also make sure that your application credentials environment variable isn't pointing somewhere unexpected.
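Since oc wait cannot be used on this resource, one workaround is to poll with oc get until the command succeeds. The wait_for helper below is a hypothetical sketch (not an oc feature), and the manifest name and namespace are taken from the question:

```shell
# Poll until a command succeeds or a timeout (in seconds) expires.
# "wait_for" is an illustrative helper, not part of oc or kubectl.
wait_for() {
  cmd="$1"; timeout="${2:-300}"; interval=5; elapsed=0
  until eval "$cmd" >/dev/null 2>&1; do
    elapsed=$((elapsed + interval))
    if [ "$elapsed" -ge "$timeout" ]; then
      echo "timed out waiting for: $cmd" >&2
      return 1
    fi
    sleep "$interval"
  done
}

# Real usage (only works against a cluster):
# wait_for "oc get packagemanifest/example-manifest -n openshift-marketplace" 300
```

Because the helper just retries an arbitrary command, it works for any resource that oc wait rejects.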
I am looking to run a script once during VM instantiation. The startup script in the Compute Engine template runs every time the VM is started. Say, for example, I have to install the GNOME desktop on a Linux host; I don't want to include that in the startup script. Rather, I am looking for something that runs once when the host is created. Of course, I want this automated. Is it possible to do this?
Edit: I am trying to achieve this in Linux OS.
As stated in the documentation [1], if we create startup scripts on a Compute Engine instance, the instance performs the automated tasks "every time" it boots up.
To run a startup script only once, the most basic way is to use a file on the filesystem to flag that the script has already run, or you could use the instance metadata to store the state.
For example via:
INSTANCE_STATE=$(curl http://metadata.google.internal/computeMetadata/v1/instance/attributes/state -H "Metadata-Flavor: Google")
Then set state to PROVISIONED after the script has run, and so on.
But it is a good idea to have your script check specifically if the actions it is going to do have already been done and handled accordingly.
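The flag-file variant can be sketched as a few lines at the top of the startup script; the flag path and the echoed messages are illustrative assumptions:

```shell
# Run-once guard using a flag file. The path is an illustrative choice;
# anything on a persistent disk that survives reboots will do.
FLAG=/var/tmp/provisioned.flag

if [ -f "$FLAG" ]; then
  echo "already provisioned, skipping"
else
  # ... one-time provisioning steps go here (e.g. installing the desktop) ...
  echo "running one-time provisioning"
  touch "$FLAG"
fi
```

Note that this only works if the flag lives on the boot/persistent disk; it will not survive if the instance is recreated from the template.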
Another option: at the end of your startup script, you can have it remove the startup-script metadata from the host instance (for example with `gcloud compute instances remove-metadata`) [2].
[1] https://cloud.google.com/compute/docs/startupscript
[2] https://cloud.google.com/compute/docs/storing-retrieving-metadata
On the GCE platform I have a Managed Instance Group to which a template is assigned.
The template has a startup script (bash) that pulls the latest code from a git repo and installs a few requirements.
Occasionally the startup script fails to update the code or fails to update the requirements.
In this case I would like to abort the startup.
What would be the correct way to go about this?
How do I signal the error?
Can I get the machine to restart or are there other options?
I want to install the FIWARE Sanity Checks in order to check my own FIWARE Lab node, and I see that there are two subcomponents in the repository. Why are there two subcomponents in the FIWARE Health repository? Which one should I install?
FIWARE Health consists of two subcomponents:
- a test execution engine, which runs all Sanity Checks on each FIWARE Lab node in order to validate its capabilities and obtain the 'Sanity Status';
- a web dashboard, which shows all generated test reports and each node's status.
The execution engine is the FIHealth Sanity Checks subcomponent and the web dashboard is the FIHealth Dashboard subcomponent. The components are independent of each other: you can install only FIHealth Sanity Checks if you are not interested in publishing your results on a web server (dashboard). Running FIHealth Sanity Checks, you will get test results as xUnit and/or HTML reports, and you can use the optional scripts included in the repository to process these results and obtain the 'Sanity Status' of your node.
Of course, you can run all sanity tests on just your node (after setting the OpenStack credential variables in your environment) by executing the following script:
$ ./sanity_checks MyNodeName
To learn more about these subcomponents and how to install them, take a look at the FIHealth Sanity Checks and FIHealth Dashboard documentation.
I am working on my first SSIS package that connects to a SQL server. While I am developing it, I am connecting using Windows authentication which works fine since my Windows user name was added to the security of the database I am working on. Now, my IT department created a service account to deploy the package with. My question is, how can I change the user name/password of the connection before I deploy it? Is there a configuration file that the connection can read from? How can this be handled?
You actually have two security contexts here to be concerned with. The first is the account required to deploy the package or project you've created. The second is the account required to be able to execute the package you've created.
End-to-end Windows Authentication (deployment, execution and data sources)
The deployment account would need to have correct permissions to the server or filesystem on which it will reside. The execution account may be configured with a very different set of permissions, primarily related to the permissions required to execute whatever tasks you've built into the package.
In order to deploy the package under a different user than your own, it may be as simple as opening an application (like Command Prompt, Windows Explorer or the SSIS Deployment Utility) as that user account and moving the package to the correct location. This can be handled on your workstation or on the server.
For the execution account, you have options depending on how you're going to operationalize the execution of the package. Here's a few scenarios:
If you will have the package executed by the SQL Server Agent and the account you need to execute the package with is the SQL Server Agent service account, you only need to create the job that runs the package. Unless otherwise programmed, packages called from a SQL Agent job will run as the SQL Agent service account.
If you will have the package be executed through a SQL Server Agent job and the account you need to use for executing the package is NOT the SQL Agent service account, you can create an SSIS Proxy Account and specify that in the SSIS Package execution job step. After creating the credential inside SQL Server, it is as simple as changing a dropdown.
If you will be using command line execution from a SQL Agent job, the above two scenarios are still applicable.
If you will be using another mechanism (like Windows Scheduler or another Enterprise Scheduling tool) that uses a command line execution-like method, you should still be able to have that process "run as" the execution account.
Windows Authentication for deployment and execution only (SQL authentication for data)
The above details still apply for executing SSIS packages via jobs and/or the command line, but you will need to pass the username and password to the connection manager at the time the package runs. You have several options for doing this and should follow any established patterns or standards your organization has in place. SSIS has long supported an XML-based .dtsConfig file, which can be read into the package at run time. There is a GUI within SSDT/BIDS that will lead you through the process of creating the file and telling it which package properties you want it to be able to configure.
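For illustration, a minimal .dtsConfig sketch that overrides a connection string at run time might look like the following; the connection manager name "MyDatabase" and the server/catalog/login values are assumptions, not taken from the question:

```xml
<?xml version="1.0"?>
<DTSConfiguration>
  <!-- Overrides the ConnectionString property of a connection manager
       named "MyDatabase"; all values below are illustrative. -->
  <Configuration ConfiguredType="Property"
                 Path="\Package.Connections[MyDatabase].Properties[ConnectionString]"
                 ValueType="String">
    <ConfiguredValue>Data Source=MyServer;Initial Catalog=MyDb;User ID=svc_ssis;Password=...;</ConfiguredValue>
  </Configuration>
</DTSConfiguration>
```

At execution time the file can then be supplied to the package, for example via dtexec's /ConfigFile option.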
A word of caution
Be careful when you're trying to SAVE sensitive information inside SSIS packages. There is a property named PackageProtectionLevel which can be set at the project and package level. By default, it is set to EncryptSensitiveWithUserKey. Now, don't let this property trick you into thinking the entire package is encrypted. It is not. This setting specifically refers to how SSIS will handle properties that are typed as sensitive. For example, the passwords in the connection managers are typed as sensitive information. SSIS will encrypt that field so that it doesn't store passwords in plain text. But it ONLY pertains to saving/storing the package. You can pass in plain text through a variable or configuration file that will be read into a sensitive field (like a password) at run-time.
If you need to be able to save a password with the package you've developed, I would recommend changing the PackageProtectionLevel to EncryptSensitiveWithPassword and setting it to something your team is able to remember or uses to protect other assets. Once that setting is in place, you will be able to check the "Save Password" box within the connection manager and have that go wherever the package goes. If you don't need to save that password with the package, change the property to DontSaveSensitive. Like I mentioned, you can still pass in credentials through configurations or other means, but they won't be stored INSIDE the package itself.