Logback: print MDC variables at runtime

I was wondering if it is possible to print a previously added MDC value from a logger call?
Example:
MDC.put("user", "tom");
log.info("Hello %X{user}");
instead of adding it to the layout pattern.
The reason for this is that I set MDC values elsewhere and only log at the end of my logic, but I want to conditionally log different values. I know a workaround might be to use different appenders.

Since MDC is essentially a map, you can always use .get() to retrieve values previously stored in it:
MDC.put("user", "tom");
log.info("Hello, {}", MDC.get("user"));

Related

How to pass parameters or variables of arm template in log query

While working on log queries in an ARM template, I am stuck on how to pass parameter values or variable values into the log query.
parameters:
{
    "resProviderName": {
        "value": "Microsoft.Storage"
    }
}
For Eg:
AzureMetrics | where ResourceProvider == **parameters('resProviderName')** | where Resource == 'testacc'
Here I am facing an error: it treats parameters('resProviderName') as a literal value and does not read the value from the parameter "resProviderName". My requirement is to take the values from parameters or variables rather than hardcoding them, as I did above with Resource == 'testacc'.
Do we have any option to read the values from either parameters or variables in the log query?
If so, please help on this issue.
The answer to this would depend on what this query segment is a part of, and how your template is structured.
It appears that you're trying to reference a resource in the query. It is best to use one of the many available template resource functions if you want the details of the resource like its resourceID (within the resources section). When referencing a resource that is deployed in the same template, provide the name of the resource through a parameter. When referencing a resource that isn't deployed in the same template, fetch the resource ID.
Also, I'd recommend referring to the ARM snippet given in this example to understand how queries can be constructed as custom variables, rather than the other way round. According to ARM template best practices, variables should be used for values that you need to use more than once in a template. If a value is used only once, a hard-coded value makes your template easier to read. For more info, please take a look at this doc.
Hope this helps.
This is an old question, but I bumped into it while searching for the same issue.
I'm much more familiar with Bicep templates, so to figure this out I created a Bicep template, constructed the query using variables, and compiled it. This generates an ARM template that you can analyze.
I found that you can use either the concat or the format function to construct your query from variables.
I preferred the format one since it looks more elegant and readable (also, Bicep build generates the string using the format function).
So based on this example, the query would be something like this:
query: "[format('AzureMetrics | where ResourceProvider == {0} | where Resource == ''testacc'' ', parameters('resProviderName') )]"
Don't forget to escape the ' characters in the ARM template, which you do by doubling the single quote.
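For reference, the same query built with concat instead of format would look something like this (untested sketch; both concat and format are standard ARM template functions, and the same quote-doubling applies):
query: "[concat('AzureMetrics | where ResourceProvider == ', parameters('resProviderName'), ' | where Resource == ''testacc'' ')]"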
I hope this helps some people.
André

SSIS Custom Component: Having any IsSorted Property and output metadata

I have a Custom Synchronous Component that works fine and I use it.
Recently, I sent it some sorted data from a Sort component (or from a source component with IsSorted=true),
but then I couldn't use its output as the input of a Merge Join because the output does not have IsSorted=true.
So I have to sort the data again, which hurts the package performance too much.
Also, I can't get output metadata that matches the input at design time.
I assume that, since my component is synchronous, its output should be as sorted as its input;
if not, how do I make the component's output data sorted?
I'd really like to know if there is a clever solution to these issues with custom pipeline components.
As your component is synchronous, somewhere in your code you are synchronizing the output, IDTSOutput1xx, with the input, IDTSInput1xx, with code like this:
output.SynchronousInputID = input.ID;
In a synchronous component the PipelineBuffer (the one exposed to the output, exposed in ProcessInput) is based upon the input buffer (usually with modified or added columns) respecting the original row ordering.
So, if you check that the incoming data is ordered, you can assert that the output is also ordered. The output exposes a property (the one Merge Join looks for) that you can set to declare this:
output.IsSorted = true;
Take into account that you could set this property to true even if the output is not actually ordered; you're relying on what you know about the incoming data, which should be correct.
You should only set this property explicitly to true when the rows really are ordered, for example in an asynchronous component in which you sort the rows before returning them. But, as I said, you could lie and set it to true without returning ordered rows. In other words, it's informative metadata.
If this doesn't work for you, you'll also have to set the SortKeyPosition of the required columns in output.OutputColumnCollection. This information is also used by Merge Join to ensure that the input apart from being ordered, is ordered by the required columns.
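Putting that together, a minimal C# sketch of the design-time side (assuming a single input and a single synchronous output, using the 100 interfaces so substitute the version your SQL Server references use, and assuming you have verified the upstream data really is sorted) could look like this:
IDTSInput100 input = ComponentMetaData.InputCollection[0];
IDTSOutput100 output = ComponentMetaData.OutputCollection[0];

// Synchronous component: the output rides on the input buffer
output.SynchronousInputID = input.ID;

// Informative metadata only; nothing gets re-sorted here
output.IsSorted = true;

// If Merge Join still complains, also set SortKeyPosition on the relevant
// columns (see output.OutputColumnCollection), as described above.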
If you want to see how you can do this using the SSIS task editor, instead of doing it "automagically" in your custom component, please, read IsSorted properties in SSIS or SSIS #98 – IsSorted is true, but sorted on what?
The Merge Join transform in SSIS has a few requirements: to join two data sources, both must be sorted and there must be a key you can join them on. In some cases, I perform the join in the OLE DB Source with a query instead.

Custom `returnFormat` in ColdFusion 10 or 11?

I have a function which is called from different components, from .cfm pages, or remotely. It returns the results of a query.
Sometimes the response from this function is manually inspected - a person may want to see the ID of a specific record so they can use it elsewhere.
The provided return formats (wddx, json, plain) aren't very easily readable for a layperson.
I'd love to be able to create a new return format, dump, where the result is first passed through writeDump() and then returned to the caller.
I know there'd be more complicated ways of solving this, like writing a dump function and calling it as a proxy, providing the component, function and parameters so it can call that function and return the results.
However I don't think it's worth going that far. I figured it'd be great if I could just write a new return format, because that's just... intuitive and nice, and I may also be able to use that technique to solve different problems or improve various workflows.
Is there a way to create custom function returnFormats in ColdFusion 10 or 11?
(From comments)
AFAIK, you cannot add a custom return format to a cffunction, but take a look at onCFCRequest. You might be able to use it to build something more generic that responds differently whenever a custom URL parameter is passed, i.e. url.returnformat=yourType. Same net effect as dumping and/or manipulating the result manually, just a little more automated.
From the comments, the return type of the function is query. That being the case, there is simply no need for a custom return format. If you want to dump the query results, do so.
queryVar = objectName.nameOfFunction(arguments);
writeDump(queryVar);

mockito when thenReturn does not return stubbed array list

List<Populate> fullAttrPopulateList = getFullAtrributesPopulateList(); //Prepare return list
when(mockEmployeeDao.getPopulateList(null)).thenReturn(fullAttrPopulateList);
MyDTO myDto = testablePopService.getMyPopData(); //Will call mockEmployeeDao.getPopulateList(null)
//verify(mockEmployeeDao,times(1)).getPopulateList(null);
assertEquals(fullAttrPopulateList.size(), myDto.getPopData().size()); //This fails because myDto.getPopData().size() returns 0
As you can see, testablePopService.getMyPopData() calls mockEmployeeDao.getPopulateList(null), but when I debug it, a zero-sized list is returned instead of the stubbed array list prepared by getFullAtrributesPopulateList().
If I uncomment the verify statement, it passes, meaning getPopulateList(null) does get called.
Can anyone advise why my stubbed array list is not returned even though the expected interaction is verified to have happened? And why does an empty array list come back rather than null if I did something wrong?
First, check that the method is non-final and visible throughout its hierarchy. Mockito can have trouble mocking methods that are hidden (e.g. overriding a package-private abstract class's implementation); also, Mockito is entirely unable to mock final methods, or even to detect that the method is final and warn you about it.
You may also want to check that your call is using the overload you expect. Mockito's default return value for list-returning methods is an empty list, so you may simply be stubbing one method and seeing the default value of another. You can see which interactions the mock has received by temporarily adding a call to verifyZeroInteractions; Mockito will throw an exception listing the calls the mock has received. You could also add verifyNoMoreInteractions, and even leave it in, at the expense of test brittleness (i.e. the test breaks even when the actual code continues to work).
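For example, a quick way to see what the mock actually received (names taken from the question; when and verifyZeroInteractions are static imports from org.mockito.Mockito, and the failing verification is meant to be removed once you've read its output):
when(mockEmployeeDao.getPopulateList(null)).thenReturn(fullAttrPopulateList);
MyDTO myDto = testablePopService.getMyPopData();
// Deliberately fails if the mock was touched at all, and the exception message
// lists every call the mock received, including the exact method and arguments:
verifyZeroInteractions(mockEmployeeDao);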

api documentation and "value limits": do they match?

Do you often see in API documentation (as in 'javadoc of public functions' for example) the description of "value limits" as well as the classic documentation?
Note: I am not talking about comments within the code
By "value limits", I mean:
can a parameter support a null value (or an empty String, or ...)?
can a 'return value' be null, or is it guaranteed to never be null (or can it be "empty", or ...)?
Sample:
What I often see (without having access to source code) is:
/**
* Get all reader names for this current Report. <br />
* <b>Warning</b>: The Report must have been published first.
* @param aReaderNameRegexp filter in order to return only readers matching the regexp
* @return array of reader names
*/
String[] getReaderNames(final String aReaderNameRegexp);
What I like to see would be:
/**
* Get all reader names for this current Report. <br />
* <b>Warning</b>: The Report must have been published first.
* @param aReaderNameRegexp filter in order to return only readers matching the regexp
* (can be null or empty)
* @return array of reader names
* (null if the Report has not yet been published,
* empty array if no reader matches the criteria,
* array of reader names matching the regexp, or all readers if the regexp is null or empty)
*/
String[] getReaderNames(final String aReaderNameRegexp);
My point is:
When I use a library with a getReaderNames() function in it, I often do not even need to read the API documentation to guess what it does. But I need to be sure how to use it.
My only concern when I want to use this function is: what should I expect in terms of parameters and return values? That is all I need to know to safely set up my parameters and safely test the return value, yet I almost never see that kind of information in API documentation...
Edit:
This can influence whether checked or unchecked exceptions are used.
What do you think? Do value limits and API documentation belong together or not?
I think they can belong together but don't necessarily have to belong together. In your scenario, it seems like it makes sense that the limits are documented in such a way that they appear in the generated API documentation and intellisense (if the language/IDE support it).
I think it does depend on the language as well. For example, Ada has a native data type that is a "restricted integer", where you define an integer variable and explicitly indicate that it will only (and always) be within a certain numeric range. In that case, the datatype itself indicates the restriction. It should still be visible and discoverable through the API documentation and intellisense, but wouldn't be something that a developer has to specify in the comments.
However, languages like Java and C# don't have this type of restricted integer, so the developer would have to specify it in the comments if it were information that should become part of the public documentation.
I think those kinds of boundary conditions most definitely belong in the API. However, I would (and often do) go a step further and indicate WHAT those null values mean. Either I indicate it will throw an exception, or I explain what the expected results are when the boundary value is passed in.
It's hard to remember to always do this, but it's a good thing for users of your class. It's also difficult to maintain if the contract the method presents changes (like null values being changed to not be allowed)... you have to be diligent also to update the docs when you change the semantics of the method.
Question 1
Do you often see in API documentation (as in 'javadoc of public functions' for example) the description of "value limits" as well as the classic documentation?
Almost never.
Question 2
My only concern when I want to use this function is: what should I expect in terms of parameters and return values? That is all I need to know to safely set up my parameters and safely test the return value, yet I almost never see that kind of information in API documentation...
If I used a function improperly, I would expect a RuntimeException thrown by the method, or a RuntimeException in another (sometimes very distant) part of the program.
Comments like @param aReaderNameRegexp filter in order to ... (can be null or empty) seem to me a way to implement Design by Contract in human language inside Javadoc.
Using Javadoc to enforce Design by Contract was the approach of iContract, now resurrected as JcontractS, which lets you specify invariants, preconditions and postconditions in a more formalized way than plain human language.
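Roughly, a contract expressed with those tags looks like this (a hypothetical withdraw method, purely to illustrate the tag style; check the iContract/JcontractS documentation for the exact expression syntax):
/**
 * Withdraws an amount from the account.
 *
 * @pre amount > 0
 * @post return >= 0
 *
 * @param amount the amount to withdraw
 * @return the remaining balance
 */
int withdraw(int amount);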
Question 3
This can influence whether checked or unchecked exceptions are used.
What do you think? Do value limits and API documentation belong together or not?
The Java language doesn't have a Design by Contract feature, so you might be tempted to use exceptions, but I agree with you that you have to be aware of when to choose checked versus unchecked exceptions. You might use the unchecked IllegalArgumentException or IllegalStateException, or you might use unit testing, but the major problem is how to communicate to other programmers that such code is about Design by Contract and should be considered a contract before changing it too lightly.
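For instance, a precondition surfaced as an unchecked exception and stated in the Javadoc (a generic sketch with made-up names, not tied to the getReaderNames example):
class Poller {
    private long timeoutMillis;

    /**
     * @param timeoutMillis polling timeout in milliseconds (must be strictly positive)
     * @throws IllegalArgumentException if timeoutMillis is zero or negative
     */
    void setTimeout(long timeoutMillis) {
        if (timeoutMillis <= 0) {
            throw new IllegalArgumentException("timeoutMillis must be > 0, was " + timeoutMillis);
        }
        this.timeoutMillis = timeoutMillis;
    }
}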
I think they do, and I have always placed comments in the header files (C++) accordingly.
In addition to valid input/output/return comments, I also note which exceptions are likely to be thrown by the function (since I often want to use the return value for... well, returning a value, I prefer exceptions over error codes).
//File:
// Should be a path to the texture file to load; if it is not a full path (e.g. "c:\example.png") it will attempt to find the file using the paths provided by the DataSearchPath list
//Return: A pointer to a Texture instance is returned; in the event of an error, an exception is thrown. When you are finished with the texture you should call the Free() method.
//Exceptions:
//except::FileNotFound
//except::InvalidFile
//except::InvalidParams
//except::CreationFailed
Texture *GetTexture(const std::string &File);
@Fire Lancer: Right! I forgot about exceptions, but I would like to see them mentioned, especially the unchecked 'runtime' exceptions that this public method could throw.
@Mike Stone:
you have to be diligent also to update the docs when you change the semantics of the method.
Mmmm, I sure hope that the public API documentation is at the very least updated whenever a change that affects the contract of the function takes place. If not, those API docs could be dropped altogether.
To add food for thought (and to go along with @Scott Dorman), I just stumbled upon the future of Java 7 annotations.
What does that mean? That certain 'boundary conditions', rather than being in the documentation, would be better off in the API itself and automatically used, at compilation time, with appropriate generated 'assert' code.
That way, if a '@CheckForNull' is in the API, the writer of the function might get away with not even documenting it! And if the semantics change, the API will reflect that change (like 'no more @CheckForNull', for instance).
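For illustration, with the JSR-305 style annotations (javax.annotation, shipped as a separate jar rather than in the JDK itself; the interface name here is made up), the earlier signature might become:
import javax.annotation.CheckForNull;
import javax.annotation.Nullable;

public interface ReaderDirectory {
    /**
     * Get all reader names for this current Report.
     */
    @CheckForNull   // callers must handle a null result (e.g. Report not yet published)
    String[] getReaderNames(@Nullable String aReaderNameRegexp);
}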
That kind of approach suggests that documentation, for 'boundary conditions', is an extra bonus rather than a mandatory practice.
However, that does not cover the special values of the return object of a function. For that, a complete documentation is still needed.