How to get the currently loaded configuration in a Symfony application?

I changed the config (config.yml) and want to check whether it worked.
How can I see the actually loaded configuration? Can I access it from a controller?

When your app starts for the first time, Symfony uses the HttpKernel component to manage loading the service container configuration from the application and bundles; it also handles compilation and caching.
After the compilation process has loaded the services from the configuration, the extensions, and the compiler passes, the container is dumped to the cache. The dumped version is then used during subsequent requests because it is more efficient.
More info at: http://symfony.com/doc/current/components/dependency_injection/workflow.html
If you dump $this->container in your controller, you will see all the parameters inside the private property parameters, including the parameters defined in parameters.yml and in config.yml.
Let's say you want to know the current value of the locale parameter - you can write this:
$this->container->getParameter('locale')
Also, all those parameters are dumped into sf_root/var/cache/your_env/appDevDebugProjectContainer.xml
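For illustration, here is a minimal controller action that reads parameters (a sketch; the class name and route are hypothetical, and it assumes use Symfony\Component\HttpFoundation\Response;):
// src/AppBundle/Controller/ConfigController.php (hypothetical)
public function showAction()
{
    // read a single parameter from the compiled container
    $locale = $this->container->getParameter('locale');
    // read every parameter at once via the parameter bag
    $all = $this->container->getParameterBag()->all();
    return new Response($locale);
}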


SSIS logging doesn't work in absence of folder

I have set up SSIS logging to a text file. In the connection manager I selected 'Create file' and gave the path c:\logs\log.txt.
The log file is not generated if the log folder is absent. How do I ensure that the folder is created if it does not exist? I tried choosing 'Create folder' on the connection manager, but that also does not create the log file when the c:\logs folder is absent.
How can I ensure the folder is automatically created and the log is always generated?
You have a chicken-and-egg scenario here. Consider the following replication of your problem.
I have the connection manager driven by a variable, LogFileName, which incorporates the date and time. That file lives in whatever folder is specified by LogPath, and the first thing my package does is create the folder if it does not already exist. "This thing can run anywhere and all is good." I've said that plenty and have the scars to show for it.
The following shows the events you can choose to log (based on what is in my package).
I am only logging OnPostExecute events. So I'm good, right? Because the post execute event won't fire until after that File System Task has completed.
If that were the case, you wouldn't have posted a question.
The first event that a package generates is a PackageStart event. Look at that list of events - no ability to filter that out. It doesn't matter whether you want that event logged or not, the logging handlers hear the PackageStart event and record it. Always.
The specified text file logger is supposed to record the data, and it's ready to record PackageStart to the file... "oh, that path doesn't exist."
The path will exist once the very first task (File System Task, Create Folder) has completed, but alas, it is too late. You either get the complete sequence of events or none.
In your Output window, you would see something like the following
SSIS package "C:\Users\bfellows\source\repos\PackageDeploymentModel\PackageDeploymentModel\ChickenAndEgg_Logging.dtsx" starting.
Error: 0xC001404B at ChickenAndEgg_Logging, Log provider "SSIS log provider for Text files": The SSIS logging provider has failed to open the log. Error code: 0x80070003.
The system cannot find the path specified.
Error: 0xC001404B at ChickenAndEgg_Logging, Log provider "SSIS log provider for Text files": The SSIS logging provider has failed to open the log. Error code: 0x80070003.
The system cannot find the path specified.
SSIS package "C:\Users\bfellows\source\repos\PackageDeploymentModel\PackageDeploymentModel\ChickenAndEgg_Logging.dtsx" finished: Success.
The package will show your Control Flow objects as all having gone green/OK, and the status message will say "Package execution completed with success," but on the Results tab you'll have a red X showing the log provider couldn't open the log.
What do I do?
Preconfigure your environments as part of the package deployment process. When we used the native logger you're inquiring about, we had a document that laid out everything new developers/new servers needed to have done to ensure all of this stuff was laid out and configured as it needed to be.
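For example, a deployment checklist step can create the expected folder before the package ever runs (a sketch; the path comes from the question):
# PowerShell: create the log folder if it does not already exist
New-Item -ItemType Directory -Force -Path 'C:\logs'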
Unless a client has a strong business case for using the classic logging methodology, I would encourage them not to use it and instead rely on the SSISDB's native logging. It's cleaner and easier to manage, with no special setup required. To quote the fine folks in Cupertino: it just works.

logback not working in Flink

I have a single-node Flink instance which has the required jars for logback in the lib folder (logback-classic.jar, logback-core.jar, log4j-over-slf4j.jar). I have removed the jars for log4j from the lib folder (log4j-1.2.17.jar, slf4j-log4j12-1.7.7.jar). logback.xml is also correctly updated in the conf folder. I have also included logback.xml in the classpath, although this does not seem to be considered while the job is run; Flink refers only to the logback.xml inside the conf folder. I have updated pom.xml as per Flink's documentation in order to exclude log4j.
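For reference, the kind of pom.xml change Flink's documentation describes looks roughly like this (a sketch; artifact names and versions are illustrative and should be taken from the docs for your Flink version):
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-java</artifactId>
    <version>${flink.version}</version>
    <exclusions>
        <exclusion>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<!-- and add logback on the classpath -->
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.3</version>
</dependency>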
I have some log entries inside a few map and flatMap functions, and some log entries outside those functions (e.g. "program execution started").
When I run the job, Flink writes only those logs that are coded outside the transformations. The logs coded inside the transformations (map, flatMap, etc.) are not written to the log file. Flink also displays strange behavior here: whenever I update the logback jars inside the lib folder (due to version changes), during the next job run all logs (even those inside map and flatMap) are written correctly to the log file. But the logs don't get written in any of the runs after that. This means that my logback.xml file is correct and the settings are also correct, but I don't understand why the same settings don't work when the same job is run again.
Update
This issue was reported to the Flink team, and they have filed it as a bug in JIRA: https://issues.apache.org/jira/browse/FLINK-7990

What does it mean to "include the corresponding worker script (app-indexeddb-mirror-worker.js) among your deployable files"?

In the documentation for app-indexeddb-mirror at https://elements.polymer-project.org/elements/app-storage?active=app-indexeddb-mirror there is a section I've copied below. I think I'm running into an error because the indicated file isn't loading, but I'm not sure how to fix the issue. Do I add a reference in staticFileGlobs in sw-precache-config.js or somewhere else?
In order to ensure that operations on IndexedDB block the main browser thread as little as possible, app-indexeddb-mirror relies on a WebWorker to operate on its corresponding IndexedDB database. If you are vulcanizing or otherwise combining your source files before your app is deployed, make sure that you include the corresponding worker script (app-indexeddb-mirror-worker.js) among your deployable files. You can configure the path to the worker script with the worker-url attribute.
The error I'm getting:
GET https://example.com/src/common-worker-scope.js?https://example.com/bower_components/app-storage/app-indexeddb-mirror/app-indexeddb-mirror-worker.js net::ERR_INTERNET_DISCONNECTED
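If the answer is indeed to add the worker script to staticFileGlobs (as the question guesses), the sw-precache-config.js entry would look roughly like this - a sketch based on the paths in the error above, not a confirmed fix; the ERR_INTERNET_DISCONNECTED suggests the scripts were not precached for offline use:
module.exports = {
  staticFileGlobs: [
    // ...your existing globs...
    'bower_components/app-storage/app-indexeddb-mirror/app-indexeddb-mirror-worker.js',
    'src/common-worker-scope.js'
  ]
};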

How do I package a log4j configuration file in a NetBeans Platform application?

Packaging a log4j configuration file in a NetBeans Platform application apparently requires some thinking through. This is what I tried...
I put log4j.xml in src/main/resources/my/package/log4j.xml of some_netbeans_module. The package is a public module package (i.e. classes from this package are used from other packages). I rebuilt the module and confirmed that the file does, in fact, get packaged into the module.
In my classes I get an instance of the logger the way I always do:
static final Logger log = Logger.getLogger(ThisClass.class);
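For reference, a minimal log4j.xml of the kind being packaged might look like this (a sketch, not the poster's actual file):
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
    <appender name="console" class="org.apache.log4j.ConsoleAppender">
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="%d %-5p [%c] %m%n"/>
        </layout>
    </appender>
    <root>
        <priority value="INFO"/>
        <appender-ref ref="console"/>
    </root>
</log4j:configuration>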
Every NetBeans Platform application has a my_app.conf file which makes it possible to set certain properties. This is where I set log4j.configuration:
log4j.configuration="/my/package/log4j.xml"
Now, when I run the application, I see the following output:
[INFO] /home/me/my_app/application/target/my_app/bin/../etc/my_app.conf: 5:
log4j.configuration=/my/package/log4j.xml: not found
What is wrong with the above configuration?
In the my_app.conf file, if you append the log4j.configuration property to the default_options property, like so:
default_options="...<other options> -J-Dlog4j.configuration=my/package/log4j.xml"
then this option will get passed to the JVM. Notice that the log4j property has -J-D prepended to it. The -J is used by NetBeans to delineate JVM properties and the -D is used by the JVM to delineate a system property.
Also, you can/should drop the quotes and the initial /, as the quotes are not necessary and NetBeans will complain if you have the initial /.
The other way to do this, and the way that I prefer since it doesn't require editing the .conf file, is to put the log4j.xml file into the default package. If you have other requirements that prevent you from doing this, then remember that you must put the log4j.configuration property in the app's platform.properties file while you're in dev mode and running the app inside the IDE. Like so:
run.args.extra=-J-Dlog4j.configuration=my/package/log4j.xml
Edit: For questions regarding NetBeans Platform you might have better luck posting to the NetBeans Platform Users forum.

Managing configuration in Erlang application

I need to distribute some sort of static configuration through my application. What is the best practice to do that?
I see three options:
1. Call application:get_env directly whenever a module needs a configuration value.
Plus: simpler than the other options.
Minus: how do you test such modules without bringing the whole application up?
Minus: how do you start a certain module with a different configuration (if required)?
2. Pass the configuration (retrieved from application:get_env) to application modules during start-up.
Plus: modules are easier to test, and you can start them with different configurations.
Minus: lots of boilerplate code. Changing the configuration format requires fixing several places.
3. Hold the configuration inside a separate configuration process.
Plus: a more-or-less type-safe approach. It's easier to track where a certain parameter is used and change those places.
Minus: you need to bring up the configuration process before running the modules.
Minus: how do you start a certain module with a different configuration (if required)?
Another approach is to transform your configuration data into an Erlang source module that makes the configuration data available through exports. Then you can change the configuration at any time in a running system by simply loading a new version of the configuration module.
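A minimal sketch of such a generated module (module and function names are illustrative):
%% app_config.erl - generated from configuration data; regenerate and
%% hot-load a new version to change values in a running system.
-module(app_config).
-export([max_widgets/0, db_host/0]).

max_widgets() -> 1000.
db_host() -> "localhost".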
For static configuration in my own projects, I like option (1). I'll show you the steps I take to access a configuration parameter called max_widgets in an application called factory.
First, we'll create a module called factory_env which contains the following:
-module(factory_env).
-export([get_env/2, set_env/2]).

-define(APPLICATION, factory).

get_env(Key, Default) ->
    case application:get_env(?APPLICATION, Key) of
        {ok, Value} -> Value;
        undefined -> Default
    end.

set_env(Key, Value) ->
    application:set_env(?APPLICATION, Key, Value).
Next, in a module that needs to read max_widgets we'll define a macro like the following:
-define(MAX_WIDGETS, factory_env:get_env(max_widgets, 1000)).
There are a few nice things about this approach:
Because we used application:set_env/3 and application:get_env/2, we don't actually need to start the factory application in order to have our tests pass.
max_widgets gets a default value, so our code will still work even if the parameter isn't defined.
A second module could use a different default value for max_widgets.
Finally, when we are ready to deploy, we'll put a sys.config file in our priv directory and load it with -config priv/sys.config during startup. This allows us to change configuration parameters on a per-node basis if desired. This cleanly separates configuration from code - e.g. we don't need to make another commit in order to change max_widgets to 500.
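For illustration, a sys.config for this example might look like the following (a sketch; the value is arbitrary):
[{factory, [
    {max_widgets, 500}
]}].
It is then picked up at startup via the -config priv/sys.config flag mentioned above.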
You could use a process (a gen_server maybe?) to store your configuration parameters in its state. It should expose a get/set interface. If a value hasn't been explicitly set, it should retrieve a default value.
-export([get/1, set/2]).
...
get(Param) ->
    gen_server:call(?MODULE, {get, Param}).
...
handle_call({get, Param}, _From, State) ->
    Reply = case lookup(Param, State#state.params) of
                undefined ->
                    %% not set explicitly - fall back to the application environment
                    application:get_env(...);
                Value ->
                    {ok, Value}
            end,
    %% a handle_call callback must return a reply tuple
    {reply, Reply, State}.
...
You could then easily mock this module in your tests. It will also be easy to update the process with new configuration at run-time.
You could use pattern matching and tuples to associate different configuration parameters to different modules:
set({ModuleName, ParamName}, Value) ->
...
get({ModuleName, ParamName}) ->
...
Put the process under a supervision tree, so it's started before all the other processes that will need the configuration.
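A sketch of a child specification for such a process (the module name config_server is illustrative):
%% in the supervisor's init/1, list the configuration process first
%% so it starts before the workers that depend on it
{config_server,
 {config_server, start_link, []},
 permanent, 5000, worker, [config_server]}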
Oh, I'm glad nobody has suggested parameterized modules so far :)
I'd go with option 1 for static configuration. You can always set options in tests via application:set_env/3,4. The reason is that your tests of the application will need to run the whole application anyway at some point, and the ability to set test-specific configuration at that point is really neat.
The application controller runs by default, so it is not a problem that you need to go through the application environment (you need to do that anyway!).
Finally, if a process needs specific configuration, say so in the configuration data! You can store any Erlang term; in particular, you can store a term which lets you override configuration parameters for a specific node.
For dynamic configuration, you are probably better off using a gen_server or the newest gproc features that let you store such dynamic configuration.
I've also seen people use a .hrl (Erlang header file) where all the configuration is defined, and include it at the start of any file that needs configuration.
It makes for very concise configuration lookups, and you get configuration of arbitrary complexity.
I believe you can also reload configuration at runtime by performing hot code reloading of the modules that include it. The disadvantage is that if you use the configuration in several modules and reload only one of them, only that one module will get its configuration updated.
However, I haven't actually checked whether it works like that, and I couldn't find definitive documentation on how .hrl files and hot code reloading interact, so make sure to double-check this before you rely on it.
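A sketch of this approach (file and macro names are illustrative):
%% config.hrl - all configuration in one place
-define(MAX_WIDGETS, 1000).
-define(DB_HOST, "localhost").

%% in any module that needs configuration:
-include("config.hrl").
%% ...then use ?MAX_WIDGETS / ?DB_HOST wherever a value is needed.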