Configuring two logback appenders independently

I might be asking something trivial, but what I've tried doesn't seem to work. With my "MAIN" appender, I want to log all "info"s everywhere, except in a third-party package (let's call it boring) which produces too many of them, so there I look at warnings only. Additionally, I want to log "debug"s in my interesting package. This works fine with the following logback.groovy:
root(INFO, ["MAIN"])
logger("interesting", DEBUG, ["MAIN"])
logger("boring", WARN, ["MAIN"])
Now I want to configure a second appender that logs one level more of detail:
root(DEBUG, ["DETAIL"])
logger("interesting", TRACE, ["DETAIL"])
logger("boring", INFO, ["DETAIL"])
This also works, but when I put both together, it doesn't. I can imagine that this is caused by the fact that each Logger is either on or off for a given level. I'm aware that for the behavior I want, the loggers in the "boring" package must be on the INFO level (because of the DETAIL appender) and that the messages for the MAIN appender are to be filtered, but I can't see how to configure this.
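For illustration, merging the two configurations would amount to something like the sketch below; it cannot express the intent, because each logger ends up with a single effective level (TRACE for interesting, INFO for boring) that applies to both appenders alike:
root(DEBUG, ["MAIN", "DETAIL"])
logger("interesting", TRACE, ["MAIN", "DETAIL"])
logger("boring", INFO, ["MAIN", "DETAIL"])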
UPDATE
I see I was doing nearly everything wrong. The line
logger("interesting", DEBUG, ["MAIN"])
says nothing like "set the level to DEBUG for the MAIN appender and the package interesting and below"; it does two independent things instead:
set the level to DEBUG for the package interesting and below
add the MAIN appender to the "interesting" Logger

This seems to be impossible without writing my own filter, which is fortunately pretty easy. I ended up with something like
root(DEBUG, ["DETAIL", "MAIN"])
// settings for the most detailed appender
logger("interesting", TRACE)
logger("boring", INFO)
appender("MAIN", ...) {
...
filter = new MyFilter()
.deny("boring", INFO)
.accept("interesting", DEBUG)
.deny("", DEBUG)
}
where deny and accept are my methods adding entries to MyFilter. The entries are evaluated sequentially, i.e.,
in package boring and below, everything with level INFO or below gets denied
in package interesting and below, everything with level DEBUG or above gets accepted
otherwise, in any package, everything with level DEBUG or above gets denied
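For reference, a minimal sketch of what such a filter could look like; MyFilter and its entry list are my own construction here, while Filter, ILoggingEvent, Level and FilterReply are the actual logback types:
import java.util.ArrayList;
import java.util.List;

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.filter.Filter;
import ch.qos.logback.core.spi.FilterReply;

public class MyFilter extends Filter<ILoggingEvent> {

    // One rule: a logger-name prefix, a level threshold, and the reply to give.
    private static class Entry {
        final String prefix;
        final Level level;
        final FilterReply reply;
        Entry(String prefix, Level level, FilterReply reply) {
            this.prefix = prefix;
            this.level = level;
            this.reply = reply;
        }
    }

    private final List<Entry> entries = new ArrayList<>();

    // In `prefix` and below, deny everything with `level` or below.
    public MyFilter deny(String prefix, Level level) {
        entries.add(new Entry(prefix, level, FilterReply.DENY));
        return this;
    }

    // In `prefix` and below, accept everything with `level` or above.
    public MyFilter accept(String prefix, Level level) {
        entries.add(new Entry(prefix, level, FilterReply.ACCEPT));
        return this;
    }

    @Override
    public FilterReply decide(ILoggingEvent event) {
        // Entries are evaluated sequentially; the first one whose prefix
        // and level condition both match decides the outcome.
        for (Entry e : entries) {
            if (!event.getLoggerName().startsWith(e.prefix)) {
                continue;
            }
            if (e.reply == FilterReply.DENY
                    && e.level.isGreaterOrEqual(event.getLevel())) {
                return FilterReply.DENY;   // event is at e.level or below
            }
            if (e.reply == FilterReply.ACCEPT
                    && event.getLevel().isGreaterOrEqual(e.level)) {
                return FilterReply.ACCEPT; // event is at e.level or above
            }
        }
        return FilterReply.NEUTRAL;
    }
}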
Inspired by this question.

Correct way to pass runtime configuration to Elixir processes

I'm trying to deploy an app to production and getting a little confused by environment and application variables and what is happening at compile time vs runtime.
In my app, I have a GenServer process that requires a token to operate. So I use config/releases.exs to set the token variable at runtime:
# config/releases.exs
import Config
config :my_app, :my_token, System.fetch_env!("MY_TOKEN")
Then I have a bit of code that looks a bit like this:
defmodule MyApp.SomeService do
  use SomeBehaviour, token: Application.get_env(:my_app, :my_token),
                     other_config: :stuff

  ...
end
In production the GenServer process (which does some HTTP stuff) gives me 403 errors, suggesting the token isn't there. So can I clarify: is the use keyword getting evaluated at compile time (in which case the application environment doesn't exist yet)?
If so, what is the correct way of getting runtime environment variables into a service like this? Is it more correct to define the config in application.ex when starting the process? E.g.
children = [
  {MyApp.SomeService, [
    token: Application.get_env(:my_app, :my_token),
    other_config: :stuff
  ]}
  ...
]

Supervisor.start_link(children, opts)
I may have answered my own question here, but it would be helpful to have someone who knows what they're doing confirm and point me the right way. Thanks
Elixir has two stages: compilation and runtime, and both execute Elixir code. To clearly understand what happens when, one should realize that everything is a macro and that, during the compilation stage, Elixir expands these macros until everything is expanded. The resulting AST is what reaches runtime.
In your example, use SomeBehaviour, foo: :bar implicitly calls the SomeBehaviour.__using__/1 macro. To expand the AST, the argument (a keyword list) needs to be expanded as well. Hence, the Application.get_env(:my_app, :my_token) call happens at compile time.
There are many possibilities to move it to runtime. If you are the owner of SomeBehaviour, make it accept the pair {:my_app, :my_token} and call Application.get_env/2 somewhere from inside it.
Or, as you suggested, pass it as a parameter to children; this code belongs to a function body, meaning it won't be expanded during the compilation stage, but will instead be passed as AST into the resulting BEAM to be executed at runtime.
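For instance, if SomeService is itself a GenServer, the lookup can be moved into init/1, which runs at runtime when the process starts. A sketch (module and config key taken from the question, the rest assumed):
defmodule MyApp.SomeService do
  use GenServer

  def start_link(opts) do
    GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  end

  @impl GenServer
  def init(opts) do
    # Evaluated at runtime, after config/releases.exs has filled the app env
    token = Application.fetch_env!(:my_app, :my_token)
    {:ok, opts |> Map.new() |> Map.put(:token, token)}
  end
end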

Dealing with invalid filehandles (and maybe other invalid objects too)

As indicated by Tom Browder in this issue, the $*ARGFILES dynamic variable might contain invalid filehandles if any of the files mentioned in the command line is not present.
for $*ARGFILES.handles -> $fh {
    say $fh;
}
will fail with an X::AdHoc exception (this should probably be improved too):
Failed to open file /home/jmerelo/Code/perl6/my-perl6-examples/args/no-file: No such file or directory
The problem will occur as soon as the invalid filehandle is used for anything. Would there be a way of checking whether the filehandle is valid before incurring an exception?
You can check if something is a Failure by checking for truthiness or definedness without the Failure throwing:
for $*ARGFILES.handles -> $fh {
    say $fh if $fh;   # check truthiness
    .say with $fh;    # check definedness + topicalization
}
If you still want to throw the Exception that the Failure encompasses, then you can just .throw it.
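For example, a sketch of doing that inside the loop above:
for $*ARGFILES.handles -> $fh {
    $fh.throw without $fh;   # rethrow the exception the Failure wraps
    say $fh.lines.elems;     # from here on the handle is known to be good
}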
TL;DR I thought Liz had it nailed, but it seems like there's a bug, or perhaps Ugh (see the sections below).
A bug?
It looks like whenever the IO::CatHandle class's .handles method reaches a handle that ought by rights to produce a Failure (delaying any exception throw), it instead immediately throws an exception (perhaps the very one that would work if it were just delayed, or perhaps something broken).
This seems either wrong or very wrong.
Ugh
See the exchange between Zoffix and Brad Gilbert and Zoffix's answer to the question How should I handle Perl 6 $*ARGFILES that can't be read by lines()?
Also:
https://github.com/rakudo/rakudo/issues/1313
https://github.com/rakudo/rakudo/search?q=argfiles&type=Issues
https://github.com/rakudo/rakudo/search?q=cathandle&type=Issues
A potential workaround is currently another bug?
In discussing "Implement handler for failed open on IO::CatHandle", Zoffix++ closed it with this code as a solution:
.say for ($*ARGFILES but role {
    method next-handle {
        loop { try return self.IO::CatHandle::next-handle }
    }
})
I see that tbrowder has reopened this issue as part of the related issue this SO question is about, saying:
If this works, it would at least be a usable example for the $*ARGFILES var in the docs.
But when I run it in 6.d (and I see similar results for 6.c), with or without valid input, I get:
say not yet implemented
(similar if I .put or whatever).
This is nuts and suggests something gutsy is getting messed up.
I've searched rt and gh/rakudo issues for "not yet implemented" and see no relevant matches.
Another workaround?
Zoffix clearly intended their code as a permanent solution, not merely a workaround. But it unfortunately doesn't seem to work at all for now.
The best I've come up with so far:
try {$*ARGFILES} andthen say $_ # $_ is a defined ArgFiles instance
orelse say $!; # $! is an error encountered inside the `try`
Perhaps this works as a black-and-white, it-either-all-works-or-none-of-it-does solution. (Though I'm not convinced it's even that.)
What the doc has to say about $*ARGFILES
The doc for $*ARGFILES says it is an instance of IO::ArgFiles, which is doc'd as a class which "exists for backwards compatibility reasons and provides no methods". And "all the functionality is inherited from" IO::CatHandle, which is subtitled "Use multiple IO handles as if they were one" and doc'd as a class that is IO::Handle, which is subtitled "Opened file or stream" and doc'd as a class that doesn't inherit from any other class (so defaults to inheriting from Any) or do any role.
So, $*ARGFILES is (exactly functionally the same as) an IO::CatHandle object, which is (a superset of the functionality of) an IO::Handle object, specifically:
The IO::CatHandle class provides a means to create an IO::Handle that seamlessly gathers input from multiple IO::Handle and IO::Pipe sources. All of IO::Handle's methods are implemented, and while attempts to use write methods will (currently) throw an exception, an IO::CatHandle is usable anywhere a read-only IO::Handle can be used.
Exploring the code for IO::CatHandle
(To be filled in later?)

ConcurrentModificationException in Android while accessing Shared Preferences

While developing an Android app, I ran into an exception that I have no clue about; I have googled related topics but none of them helped.
Fatal Exception: java.util.ConcurrentModificationException
java.util.HashMap$HashIterator.nextEntry (HashMap.java:806)
java.util.HashMap$KeyIterator.next (HashMap.java:833)
com.android.internal.util.XmlUtils.writeSetXml (XmlUtils.java:298)
com.android.internal.util.XmlUtils.writeValueXml (XmlUtils.java:447)
com.android.internal.util.XmlUtils.writeMapXml (XmlUtils.java:241)
com.android.internal.util.XmlUtils.writeMapXml (XmlUtils.java:181)
android.app.SharedPreferencesImpl.writeToFile (SharedPreferencesImpl.java:596)
android.app.SharedPreferencesImpl.access$800 (SharedPreferencesImpl.java:52)
android.app.SharedPreferencesImpl$2.run (SharedPreferencesImpl.java:511)
java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1112)
java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:587)
java.lang.Thread.run (Thread.java:841)
Preferences are thread safe(!), but not process safe. The answer of #mohan mishra is simply not true; there is no need to synchronize everything. The problem here, as stated in another question, is that per the documentation you MUST NOT modify any instance returned by getStringSet and getAll:
getStringSet()
Note that you must not modify the set instance returned by this call.
The consistency of the stored data is not guaranteed if you do, nor is
your ability to modify the instance at all.
getAll()
Note that you must not modify the collection returned by this method,
or alter any of its contents. The consistency of your stored data is
not guaranteed if you do.
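The practical consequence: copy the returned set before mutating it, so the instance SharedPreferences is writing to disk is never modified concurrently. A sketch (key and preference-file names are made up):
import android.content.Context;
import android.content.SharedPreferences;

import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

class PrefsHelper {
    static void addValue(Context context, String value) {
        SharedPreferences prefs =
                context.getSharedPreferences("prefs", Context.MODE_PRIVATE);
        // Copy first: mutating the set returned by getStringSet() is what
        // trips up XmlUtils.writeSetXml() while it serializes the map.
        Set<String> copy = new HashSet<>(
                prefs.getStringSet("my_key", Collections.<String>emptySet()));
        copy.add(value);
        prefs.edit().putStringSet("my_key", copy).apply();
    }
}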
To the other question
Documentation
Please ensure that you are not accessing the preferences from any type of background thread. Also, all your methods that add to preferences must be synchronised (if you have your own preference-managing class).

Warn (or fail) if a package is run without having overridden every package connection string with a config file entry

It seems like a very common issue with SSIS packages is releasing a package to Production that ends up running with the wrong connection string parameters. This could happen by making any one of many mistakes or omissions. As a result, I find it helpful to dump all ConnectionString values to a log file. This helps me understand which connection strings were actually applied to the package at run time.
Now, I am considering having my packages check whether every connection object in my package had its connection string overridden by an entry in the config file and, if not, return a warning or even fail the package. This is to allow easier configuration by extracting all environment variables to a config file. If a connection string is never overridden, there is a risk that a package run in production may use development settings, or that a package run in a non-production setting during testing may accidentally be run against production.
I'd like to borrow from anyone who may have tried to do this. I'd also be interested in suggestions on how to accomplish this with minimal work.
Thx
Technical question 1 - what are my connection strings
This is an easy question to answer. In your package, add a Script Task and enumerate through the Connections collection. I fire the OnInformation event, and if I had this scheduled, I'd be sure to have the /Rep IEW option in my dtexec call to ensure I record Information, Errors and Warnings.
namespace TurnDownForWhat
{
    using System;
    using System.Data;
    using Microsoft.SqlServer.Dts.Runtime;
    using System.Windows.Forms;

    /// <summary>
    /// ScriptMain is the entry point class of the script. Do not change the name, attributes,
    /// or parent of this class.
    /// </summary>
    [Microsoft.SqlServer.Dts.Tasks.ScriptTask.SSISScriptTaskEntryPointAttribute]
    public partial class ScriptMain : Microsoft.SqlServer.Dts.Tasks.ScriptTask.VSTARTScriptObjectModelBase
    {
        public void Main()
        {
            bool fireAgain = false;
            foreach (var item in Dts.Connections)
            {
                Dts.Events.FireInformation(0, "SCR Enumerate Connections",
                    string.Format("{0}->{1}", item.Name, item.ConnectionString),
                    string.Empty, 0, ref fireAgain);
            }

            Dts.TaskResult = (int)ScriptResults.Success;
        }

        enum ScriptResults
        {
            Success = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Success,
            Failure = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Failure
        };
    }
}
Running that on my package, I can see I had two Connection managers, CM_FF and CM_OLE along with their connection strings.
Information: 0x0 at SCR Enum, SCR Enumerate Connections: CM_FF->C:\ssisdata\dba_72929.csv
Information: 0x0 at SCR Enum, SCR Enumerate Connections: CM_OLE->Data Source=localhost\dev2012;Initial Catalog=tempdb;Provider=SQLNCLI11;Integrated Security=SSPI;
Add that to ... your OnPreExecute event for all the packages; no one sees it, but every package reports back.
Technical question 2 - Missed configurations
I'm not aware of anything that will allow a package to know it's under configuration. I'm sure there's an event, as you will see in your Information/Warning messages, that a package attempted to apply a configuration, didn't find one, and is going to retain its design-time value. Information: I'm configuring X via Y. Warning: tried to configure X but didn't find Y. But how to have a package inspect itself to find that out, I have no idea.
That said, I've seen reference to a property that fails a package on a missed configuration. I'm not seeing it now, but I'm certain it exists in some crevice. You can supply the /W (WarnAsError) parameter to dtexec, which treats warnings as errors, and really, warnings are just errors that haven't grown up yet.
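For example, a dtexec invocation combining both switches (file paths hypothetical):
dtexec /File "C:\ssis\MyPackage.dtsx" /ConfigFile "C:\ssis\Prod.dtsConfig" /Rep IEW /WarnAsError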
Unspoken issue 1 - Permissions
I had a friend who botched an XML config file as part of their production deploy. Their production server started consuming data from a dev server. Bad things happened. It sounds like you have had a similar situation. The resolution is easy, insulate your environments. Are you using the same service account for your production class SQL Server boxes and dev/test/uat/qa/load/etc? STOP. Make a new one. Don't allow prod to talk to any boxes that aren't in their tier of service. Someone bones a package and doesn't set a configuration? First of all, you'll catch it when it goes from dev to something-before-production because that tier wouldn't be able to talk to anything else that's not that level. But if you're in the ultra cheap shop and you've only got dev and prod, so be it. Non-configured package goes to prod. Prod SQL Agent fires off the package. Package uses default connection manager and fails validation because it can't talk to the dev sales database.
Unspoken issue 2 - template
What's your process when you have a new package to build? Does your team really start from scratch? There are so many ways to solve this problem, but the core concept is to define your best practices for Configuration, Logging, Package Protection Level, Transaction levels, etc. in some easily consumable form. Maybe that's 3 starter packages: one for raw acquisition, one that stages and conforms the data, and a last one that moves data from conformed into the final destination. Teammates then simply have to pick one to start from and fill in the spots that need it. If they choose to do their own thing, that's the stick you beat them with when their package fails to run in production because they didn't follow the standard path.
There are other approaches here. If you're a strong .NET crew, you can gen your template packages that way. At this point, I create my templates with Biml and use that to drive basic package creation.
If I am understanding you correctly, the below solution should work.
My suggestion to you is to turn on the DontSaveSensitive option for the ProtectionLevel property at the top level of the package.
This will require you to use package configurations for every connection; otherwise the package will not have the credentials to make a connection.

Logging different project libraries with a single logging library

I have a project in Apps Script that uses several libraries. The project needed a more complex logger (logging levels, color coding), so I wrote one that outputs to Google Docs. All is fine and dandy if I immediately print the output to the Google Doc, when I import the logger in all of the libraries separately. However, I noticed that with a lot of logging it takes much longer than without. So I am looking for a way to write all of the output in a single go at the end, when the main script finishes.
This would require either:
Being able to define the logging library once (in the main file) and somehow accessing it in the attached libs. I can't seem to find a way to get the main project's closure from within the libraries, though.
Some sort of singleton logger object. Not sure if this is possible from within a library; I have trouble figuring it out either way.
Extending the built-in Logger to suit my needs; not sure about that though...
My project looks as follows:
Main Project
Library 1
Library 2
Library 3
Library 4
This is how I use my current logger:
var logger = new BetterLogger(/* logging level */);
logger.warn('this is a warning');
Thanks!
Instead of writing to the file at each logged message (which is the source of your slowdown), you could write your log messages to the logger library's ScriptDb instance and add a .write() method to your logger that outputs the messages in one go. Your logger constructor can take a messageGroup parameter, which can serve as a unique identifier for the lines you would like to write. This would also allow you to use different files for logging output.
As you build your messages into proper output to write to the file (don't write each line individually; batch operations are your friend), you might want to remove the messages from ScriptDb. However, it might also be a nice place to pull back old logs.
Your message object might look something like this:
{
  message: "My message",
  color: "red",
  messageGroup: "groupName",
  level: 25,
  timeStamp: new Date().getTime(), // ScriptDb won't take date objects natively
  loggingFile: "Document Key"
}
The query would look like:
var db = ScriptDb.getMyDb();
var results = db.query({messageGroup: "groupName"}).sortBy("timeStamp",db.NUMERIC);
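Putting it together, the .write() method could then drain the group in one go; a sketch (property names follow the message object above, the rest is assumed):
BetterLogger.prototype.write = function() {
  var db = ScriptDb.getMyDb();
  var results = db.query({messageGroup: this.messageGroup})
                  .sortBy('timeStamp', db.NUMERIC);
  var lines = [];
  while (results.hasNext()) {
    var item = results.next();
    lines.push(item.message);
    db.remove(item); // drop each message once it is queued for output
  }
  if (lines.length > 0) {
    // One appendParagraph call per run instead of one per logged message
    DocumentApp.openById(this.loggingFile)
               .getBody()
               .appendParagraph(lines.join('\n'));
  }
};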