Keyrock Installation - fiware

I was following the manual installation steps provided here:
https://forge.fiware.org/plugins/mediawiki/wiki/fiware/index.php/Identity_Management_-KeyRock-_Installation_and_Administration_Guide
We do not know what to do in step "4. Initial Sample Data" of Keystone, since it says that we should use the automatic installation tools if we plan to use Keystone with FIWARE Identity Management. So, can we install Keyrock manually, or do we have to use the automatic tools in order to use Keystone with FIWARE?
Thanks in advance,
Rafa.

It's hard to give you an answer without knowing your specific use case, so I will try to give a broad explanation.
Yes, you can install it manually if that's what you want. The "Initial Sample Data" step depends on how you want to use the Identity Manager (or its Keystone-based back-end). The sample data is simply some fake data in the database so you can demo or test the Identity Manager right away. That said, the installation instructions are not very clear in explaining that there is some "required data" and some "testing data", so I will try to explain it better here (and update the wiki afterwards :) )
If you ONLY want our modified version of Keystone (you plan to use it as the Keystone component of an OpenStack deployment), you don't need to bother with this "sample data" at all. You will need to create users for the services, roles, projects, services, endpoints, etc., as in a normal Keystone installation. For creating all of this you have three options: create it by hand, use the sample_data.sh script provided by Keystone (as hinted in the wiki) as a starting point, or use the keystone.populate task in the automated tools (which you can modify to suit your needs).
If you plan to use the whole Identity Manager component (Keystone back-end + Horizon front-end), then you still need this "required data", but the Keystone-provided sample_data.sh is no longer a valid starting point; you should create it either by hand or using keystone.populate. Additionally, you can create some "sample data" to test out the IdM right away with some users, organizations and applications. You can create this either by hand or using the automated tools' keystone.test_data task.
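For illustration only, here is roughly what creating that "required data" looks like if you script it against the Keystone v3 API with python-keystoneclient (the same kind of thing the keystone.populate task automates). The URL, credentials and names below are placeholders, not the exact values our tools use:

    # Sketch only: minimal "required data" via python-keystoneclient (v3 API).
    # Endpoint, credentials and names are placeholders for this example.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from keystoneclient.v3 import client

    auth = v3.Password(auth_url="http://localhost:5000/v3",
                       username="admin", password="secret", project_name="admin",
                       user_domain_id="default", project_domain_id="default")
    keystone = client.Client(session=session.Session(auth=auth))

    # A project and a user for the IdM service itself.
    idm_project = keystone.projects.create(name="idm", domain="default")
    idm_user = keystone.users.create(name="idm", password="idm",
                                     domain="default", default_project=idm_project)

    # A role, assigned to that user on that project.
    admin_role = keystone.roles.create(name="admin")
    keystone.roles.grant(role=admin_role, user=idm_user, project=idm_project)

    # A service catalog entry and endpoint, as any Keystone install needs.
    identity = keystone.services.create(name="keystone", type="identity")
    keystone.endpoints.create(service=identity, interface="public",
                              url="http://localhost:5000/v3", region="RegionOne")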
I will rewrite that section of the wiki to better reflect these options and also add a list of all the "required data" for each situation. Hopefully this answer wasn't too confusing.


Should I use a MarketPlace action instead of a plain bash `cp` command to copy files?

I am noticing there are many actions in the GitHub marketplace that do the same. Here is an example:
https://github.com/marketplace/actions/copy-file
Is there any benefit to using a GitHub Marketplace action instead of plain bash commands? Is there a recommended practices guideline that helps decide whether to use Marketplace actions versus plain bash or command-line steps?
These actions don't seem to have any real value in my eyes...
Other than that, these actions run in Docker and don't need cp, wget or curl to be available on the host, and they ensure a consistent version of their tools is used. If you're lucky, these actions also run consistently the same way on Windows, Linux and Mac, whereas your bash scripts may not run on Windows. But the action author would have to ensure this; it's not something that comes by default.
One thing that could be a reason to use these actions from the marketplace is that they can run as a post-step, which the run: script/bash/pwsh steps can't.
They aren't more stable or safer either: unless you pin the action to a commit hash or fork it, the owner of the action can change its behavior at any time. So you are putting trust in the original author.
Many actions provide convenience functions, like better logging, output variables or the ability to safely pass in a credential, but these particular actions seem to be more of an exercise in building an action by their authors, and they don't really serve a great purpose.
The documentation that comes with each of these actions doesn't provide a clear reason to use them, and the actions don't follow the preferred versioning scheme... I'd not use these.
So, when would you use an action from the Marketplace? In general, actions, like certain CLIs, serve a specific purpose, and an action should contain everything it needs to run.
An action could contain a complex set of steps, ensure proper handling of arguments, issue special logging commands to make the output more human-readable or update the environment for tasks running further down in the workflow.
An action that adds this extra functionality on top of existing CLIs makes it easier to pass data from one action to another, or even from one job to another.
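To make that concrete: the plumbing such an action wraps is just GitHub's workflow commands and special files, which any plain script step can also emit itself. A rough sketch, written in Python here (the output name and values are made up for the example):

    # Sketch: what an action typically does under the hood when it adds
    # logging and output handling on top of an existing CLI. Any "run:" step
    # can do the same by emitting workflow commands and writing the files
    # GitHub provides.
    import os

    # Grouped, human-readable log output in the Actions UI.
    print("::group::Copying files")
    print("copied 3 files")
    print("::endgroup::")
    print("::notice::copy finished without warnings")

    # Expose a step output for later steps:
    # ${{ steps.<step-id>.outputs.artifact_path }}
    with open(os.environ["GITHUB_OUTPUT"], "a") as fh:
        fh.write("artifact_path=dist/app.tar.gz\n")

    # Update the environment for all subsequent steps in this job.
    with open(os.environ["GITHUB_ENV"], "a") as fh:
        fh.write("BUILD_FLAVOR=release\n")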
An action is also easier to re-use across repositories, so if you're using the same scripts in multiple repos, you could wrap them in an action and reference it from that one place instead of duplicating the script in each workflow or adding it to each repository.
GitHub provides little guidance on when to use an action, or on when an author should or shouldn't publish an action to the Marketplace. Basically, anyone can publish anything to the Marketplace that fulfills the minimum metadata requirements.
GitHub does provide guidance on versioning for authors: good actions should create tags that users can pin to, and authors should practice semantic versioning to avoid accidentally breaking their users. Actions whose docs tell you to reference a branch like main or master are suspect in my eyes and I wouldn't use them; their implementation could change from under you at any time.
As a consumer of any action, you should be aware of the security implications of using it. Other than checking that the author has 2FA enabled on their account, GitHub does little to no verification on actions it doesn't own itself. Any author could in theory replace their implementation with ransomware or a bitcoin miner. So, for actions whose author you haven't built a trust relationship with, it's recommended to fork the action to your own account or organization and inspect the contents prior to running them on your runner, especially if that's a private runner with access to protected environments. My colleague Rob Bos has researched this topic deeply and has spoken about it frequently at conferences, on podcasts and in live streams.

set DEPOT_TOOLS_WIN_TOOLCHAIN=0 if you are not a googler

Can someone explain how being a "googler" or not affects whether an open source package builds?
When attempting to build v8 the build docs state
"If you are a non-googler you need to set DEPOT_TOOLS_WIN_TOOLCHAIN=0"
When I set DEPOT_TOOLS_WIN_TOOLCHAIN to 0 as a "non googler" the build cuts short.
When I set DEPOT_TOOLS_WIN_TOOLCHAIN to 1 as a "googler" the build doesn't cut short but errors out later on in a way that points to requiring a specific hash value on the build system.
When I inquired about the error on the v8-users Google group, a Google employee stated:
"It wouldn't enter this code if the environment variable I mentioned
was set correctly. If you do enter this code it's not set. And it is
expected to fail"
Which means the build is expected to fail for "non googlers".
He goes on to say that the build platform I'm on is not supported (non googler, no hash value...) yet that "it should compile at least".
Can someone explain how "it should compile at least"?
If you are a "non googler" do you use another build script and build tools ? Possibly get the source otherwise and use different parameters ? Do you even attempt to build the package at all (in the sense that "non googlers" are not meant to build the package)?
If anyone has some experience here it would be helpful as it would save a lot of time and trouble for people trying to build packages with
set DEPOT_TOOLS_WIN_TOOLCHAIN=0 if you are not a googler
Thank you.
You should certainly be able to build V8. You do not need access to any special infrastructure or tooling. There are many V8 committers that are not Google employees.
That particular environment variable, DEPOT_TOOLS_WIN_TOOLCHAIN, is treated differently for Google employees for licensing reasons (the Microsoft toolchain is distributed internally via depot_tools), but you can build V8 both with and without that variable set.
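To illustrate what the variable actually switches, the logic boils down to roughly the following (a paraphrased sketch, not the real depot_tools source):

    # Paraphrased sketch of the decision DEPOT_TOOLS_WIN_TOOLCHAIN controls
    # on Windows; not the actual depot_tools code, just the gist of it.
    import os

    def use_internal_toolchain():
        # Unset, or anything other than "0", means "behave like a Googler".
        return os.environ.get("DEPOT_TOOLS_WIN_TOOLCHAIN", "1") != "0"

    if use_internal_toolchain():
        # Googlers: fetch a pinned, pre-packaged MSVC toolchain from
        # Google-internal storage -- this is the step that fails for others.
        print("downloading pinned Visual Studio toolchain...")
    else:
        # Non-Googlers: use the locally installed Visual Studio / Windows SDK.
        print("using the locally installed Visual Studio toolchain")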

Open source solution for enterprise configuration management with ui/dashboard

I am working on an enterprise application that has multiple components/services. Instead of storing configuration for each component/service per environment separately, I am looking for something that allows me to store configuration in a hierarchy, managed through an interactive UI, such as:
Application-1 -> Component-1 -> Env -> Resource-1 -> Option-1:Value-1 [leaf level]
Application-1 -> Component-1 -> Env-1 -> Resource-1 -> Option-1:Value-1 [leaf level]
Application-1 -> Component-2 -> Env -> Resource-1 -> Option-1:Value-1 [leaf level]
And get these values back through a method such as REST service calls.
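To make the requirement concrete, this is roughly the lookup behaviour I have in mind (a rough Python sketch with made-up values, where I read the plain "Env" node as a default/fallback level; in practice the values would be stored centrally and served over REST):

    # Rough sketch of the desired lookup: application -> component ->
    # environment -> resource -> option, where the generic "Env" level acts
    # as a fallback for environments without a specific override.
    config = {
        "Application-1": {
            "Component-1": {
                "Env":   {"Resource-1": {"Option-1": "default-value"}},
                "Env-1": {"Resource-1": {"Option-1": "env1-value"}},
            },
        },
    }

    def get_option(app, component, env, resource, option):
        component_cfg = config[app][component]
        # Prefer the environment-specific value, fall back to the generic level.
        for level in (env, "Env"):
            try:
                return component_cfg[level][resource][option]
            except KeyError:
                continue
        raise KeyError("%s/%s/%s/%s/%s not configured"
                       % (app, component, env, resource, option))

    print(get_option("Application-1", "Component-1", "Env-1", "Resource-1", "Option-1"))  # env1-value
    print(get_option("Application-1", "Component-1", "Env-2", "Resource-1", "Option-1"))  # default-value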
To my understanding, you are looking for a configuration solution for a system that is composed of multiple components/services.
In the company where I work we have developed a simple and powerful configuration platform designed to enable an entire system’s configuration from a single location, making the configuration process convenient, clear, safe, organized and informative.
Some of the key features we support include:
A single convenient GUI
Parameters meta-data
Levels and Inheritance
Component upgrades
Meta-data Validation
Third-party component support (we support everyone)
Support and management of namespaces and versions
Advanced features like auto-link
If we released this as open source, would you be interested?
It is unclear if you are looking for a coding foundation or a ready-to-go app.
It is unclear what your focus is (REST, hierarchy, UI).
It is unclear why you need the combination of REST, hierarchy and UI, and where there is room for alternatives.
short answer brainstorming:
flat ini file + notepad (-> no REST)
mysql + mysql workbench (-> no REST, no built-in hierarchy)
http://sourceforge.net/directory/business-enterprise/enterprise/bsm/cmdb/os:windows/freshness:recently-updated/ (-> ready to go tool)
http://restsql.org/doc/Overview.html (framework to add hierarchy support on sql)
You may have heard of Webmin and Virtualmin, which provide a cPanel-like UI for server management. A little-known feature of both is that they can be controlled from the CLI and also remotely. Here is the documentation on the "Virtualmin Remote API", which allows you to control services and configuration via HTTP and get output as XML or JSON.
Now you still need to create your own modules, which may or may not be hard work, but it's probably going to be much easier than writing the whole thing from scratch.
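As a rough illustration of what a remote call looks like (check the Remote API documentation for the exact program names and parameters; the host, credentials and program below are placeholders used only for this sketch):

    # Rough sketch of calling the Virtualmin Remote API over HTTP; take the
    # exact program names and parameters from the linked documentation.
    # Host and credentials are placeholders.
    import requests

    response = requests.get(
        "https://config-server.example.com:10000/virtual-server/remote.cgi",
        params={"program": "list-domains", "json": 1},
        auth=("root", "secret"),
        verify=False,  # Webmin often runs with a self-signed certificate
    )
    response.raise_for_status()
    print(response.json())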

Azure : can we check if a setting exists before trying to read it?

I currently use RoleEnvironment.GetConfigurationSettingValue(propertyName) to get the value of a setting defined in my WebRole config file (csdef + cscfg). Ok, sounds right.
This works well if the setting exists, but fails with an exception if the setting is not defined in the csdef and the cscfg.
I'm migrating an existing app to Azure which has many configuration settings in web.config. In my code, to read a setting value, I'd like to test: if it exists in the WebRole config (csdef + cscfg), I read it from there; otherwise I read it with ConfigurationManager from web.config.
This would avoid having to migrate all the settings from my web.config and would allow me to customize one after the app is already deployed.
Is there a way to do this?
I don't want to wrap GetConfigurationSettingValue in a try/catch (and read from web.config if I enter the catch) because it's a really ugly way of doing it (and, above all, it's not efficient!).
Thanks!
Update for 1.7 Azure SDK.
The CloudConfigurationManager class has been introduced. It allows a single GetSetting call to look in your cscfg first and then fall back to web.config if the key is not found.
http://msdn.microsoft.com/en-us/LIBRARY/jj157248
Pre 1.7 SDK
The simple answer is no (that I know of).
The more interesting topic is to consider configuration as a dependency. I have found it beneficial to treat configuration settings as a dependency so that the backing implementation can be changed over time. That implementation may be a fake for testing, or something more complex like switching from .config/.cscfg to a database implementation for multi-tenant solutions.
Given this configuration wrapper, you can write that TryGetSetting as an internal method for whatever your source of configuration options is. If this feature is ever added to the RoleEnvironment members, you would only have to change that internal implementation.
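The shape of such a wrapper, sketched here in Python purely to illustrate the idea (the real implementation would be C# against RoleEnvironment / ConfigurationManager), is a small interface your code depends on, with sources tried in order:

    # Illustrative sketch only (the real code would be C#): configuration as a
    # dependency, with sources tried in order and no exceptions for control flow.
    class DictSource:
        """Stand-in for a real source such as the cscfg or web.config."""
        def __init__(self, values):
            self._values = values

        def try_get(self, key):
            return self._values.get(key)   # None when the key is absent

    class Settings:
        def __init__(self, *sources):
            self._sources = sources

        def get(self, key, default=None):
            for source in self._sources:
                value = source.try_get(key)
                if value is not None:
                    return value
            return default

    # cscfg values win; web.config fills in the rest.
    settings = Settings(DictSource({"StorageAccount": "from-cscfg"}),
                        DictSource({"SmtpHost": "from-web-config"}))
    print(settings.get("StorageAccount"))   # from-cscfg
    print(settings.get("SmtpHost"))         # from-web-config
    print(settings.get("Missing", "n/a"))   # n/a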

Fetching project code from different repositories

We want to use Hudson for our CI, but our project is made of code coming from different repositories. For example:
- org.sourceforge... should be checked out from http:/sv/n/rep1.
- org.python.... should be checked out from http:/sv/n/rep2.
- com.company.product should be checked out from http:/sv/n/rep3.
Right now we use an ant script with a get.all target that checks out/updates the code from the different repositories.
So I can create a job that lets Hudson call our get.all target to fetch all the source code and then call a second target to build everything. But in that case, how do I monitor changes in the 3 repositories?
I'm thinking that I could just not assign any repository in the job configuration and schedule the job to fetch/build at a regular time interval, but I feel that I'd miss the point of CI if builds can't be triggered by commits/repository changes.
What would be the best way to do this? Is there a way to configure project dependencies in Hudson?
I haven't poked at the innards of our Hudson installation too much, but there is a button under Source Code Management that says "Add more locations..." (if that isn't the default out-of-the-box configuration, let me know and I will dig deeper).
Most of our Hudson builds require at least a dozen different SVN repos to be checked out, and Hudson monitors them all automatically. We then have the Build steps invoke Ant in the correct order to build the dependencies.
I assume you're using subversion. If not, then please ignore.
Subversion, at least in newer versions, supports a concept called 'externals'.
An external is an API, alternate project, dependency, or whatnot that does not reside in YOUR project repository.
See: http://svnbook.red-bean.com/en/1.1/ch07s04.html
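For example (a sketch only: the repository URLs are the placeholders from the question and the directory names are made up), you would set the property on the main project's root and commit it, after which a checkout or update of the main project pulls in the external code as well:

    # Sketch: define svn:externals on the main project so that code from the
    # other repositories is pulled in on checkout/update. URLs are the
    # placeholders from the question; directory names are made up.
    import subprocess

    externals = "\n".join([
        "org.sourceforge  http:/sv/n/rep1/trunk",
        "org.python       http:/sv/n/rep2/trunk",
    ])

    subprocess.run(["svn", "propset", "svn:externals", externals, "."], check=True)
    subprocess.run(["svn", "commit", "-m", "Pull dependent code in via externals"],
                   check=True)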