How to add a prefix to a multi-line string? - boost-log

Sometimes the source string contains multiple lines. In the log output there is no prefix from the second line onward. I don't want to split the source string into single lines myself; any ideas?
#include <boost/log/trivial.hpp>

int main()
{
    BOOST_LOG_TRIVIAL(info) << "hello\nworld";
    return 0;
}
output:
[2017-12-05 09:49:34.957813] [0x000028d4] [info] hello
world
I want the following output:
[2017-12-05 10:01:35.033017] [0x00000af8] [info] hello
[2017-12-05 10:01:35.033017] [0x00000af8] [info] world

Unfortunately, this is not possible to do with Boost.Log. Whatever output you generate in the streaming expression, including newlines, is considered a single log record, which is formatted once, hence it only has one timestamp and other attributes in the output.
You should manually separate the lines into different log records. If the log message text is received from an external source (e.g. over the network or as a callback from some library), you will have to split the text on newlines yourself, as in the sketch below.
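A minimal sketch of that manual splitting, using the trivial logging shown in the question (the log_info_lines helper is just an illustrative name, not something provided by Boost.Log):
#include <sstream>
#include <string>
#include <boost/log/trivial.hpp>

// Emit one log record per line so every line gets its own prefix.
void log_info_lines(const std::string& text)
{
    std::istringstream stream(text);
    std::string line;
    while (std::getline(stream, line))
    {
        BOOST_LOG_TRIVIAL(info) << line;  // each line is a separate record
    }
}

int main()
{
    log_info_lines("hello\nworld");
    return 0;
}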

Related

NIFI - Using one ReplaceText Processor how to add brackets at the beginning and end of each line

Every 5 seconds I receive a log file with 10,000 rows like the following:
log_datetime1 host_name1 log_message1
log_datetime2 host_name2 log_message2
log_datetime3 host_name3 log_message3
I want to send them to a Kudu or Parquet table as the following JSON:
{"cureent_datetime":"datetime", "log_data":"log_datetime1 host_name1 log_message1"}
{"cureent_datetime":"datetime", "log_data":"log_datetime2 host_name2 log_message2"}
{"cureent_datetime":"datetime", "log_data":"log_datetime3 host_name3 log_message3"}
Currently I'm using two ReplaceText processors: one to add
{"current_datetime":"datetime", "log_data":" at the beginning of each line of the 10,000-row log file, and a second one to add "} at the end of each line.
I was wondering if I could do both steps in one ReplaceText processor.
Using the search pattern (.+)(?=\n) and the replacement pattern {"current_datetime":"datetime", "log_data":"$1"} will result in the desired output. The search pattern looks for text which is followed by a newline, and the replacement includes the capture group inside the templated JSON structure.
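To see what the pattern does outside of NiFi, here is a rough C++ sketch that applies the same regex and replacement to a three-line sample (std::regex's ECMAScript grammar supports the (?=\n) lookahead; this only illustrates the pattern, not the ReplaceText processor itself):
#include <iostream>
#include <regex>
#include <string>

int main()
{
    // Three sample log lines, each terminated by a newline.
    std::string input =
        "log_datetime1 host_name1 log_message1\n"
        "log_datetime2 host_name2 log_message2\n"
        "log_datetime3 host_name3 log_message3\n";

    // Capture a whole line that is followed by a newline and wrap it in the JSON template.
    std::regex line_pattern("(.+)(?=\\n)");
    std::string result = std::regex_replace(
        input, line_pattern,
        "{\"current_datetime\":\"datetime\", \"log_data\":\"$1\"}");

    std::cout << result;  // every line is wrapped, the newlines themselves are kept
    return 0;
}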

Legal change to JSON input invalidates simple jq

Another department continually updates a JSON file that I then query. Its format is three lists of similar-looking dictionaries:
{
"levels":
[
{"a":1, "b":False, "c":"2012", "d":"2017"}
,{"a":2, "b":True, "c":"2013", "d":"9999"}
,...
]
,"costs":
[
{"e":12, "f":"foo", "g":"blarg", "h":"2015", "i":"2018"}
,{"e":-3, "f":"foo", "g":"glorb", "h":"2013", "i":"9999"}
,...
]
,"recipes":
[
{"j":"BAZ", "k":["blarg","glorb","bleeg"], "l":"dill", "m":"2016", "n":"2017"}
,{"j":"BAZ", "k":["blarg","bleeg"], "l":"dill", "m":"2017", "n":"9999"}
,...
]
} # line 3943 (see below)
Recently, my simple jq queries like
jq '.["recipes"][] | select(.l | test("ill"))' < jsonfile
stopped returning all of the results they should (e.g. returning only one of the two "dill" lines above) and started printing this error message:
jq: error (at <stdin>:3943): null (null) cannot be matched, as it is not a string
Line 3943 mentioned in the error is the final line of the file. Queries against the "levels" and "costs" sections of the file continue to work like normal; it's only the "recipes" section of the file that is breaking, as though jq thinks the closing brace of the file is still part of the "recipes" section.
To me this suggests there's been a formatting change or error in the last section of the file. However, software other than jq (e.g. python) doesn't report any problems parsing it. Before I start going through the input line by line ... does this error message indicate anything obvious to a jq expert?
Alas, I do not keep old versions of the file around for comparison. (I think I will start today.)
(self-answering after a bit of investigating)
I think there was no formatting error or change in formatting in the input.
I don't know why my query syntax did not encounter errors previously (maybe I just did not notice), but it seems that the entries in the "recipes" section often do not contain an "l" attribute, and jq will cease processing as soon as it encounters one that does not.
I also don't know why jq does not generate the same error message for every record that lacks that attribute, nor why it waits to the final line of the input to generate the single message. (Maybe that behavior is documented somewhere.)
In any case, I fixed the error (not just the message, but also the failure to display all relevant records) by testing for the presence of the attribute first:
jq '.["recipes"][] | select(has("l") and (.l | test("ill")))' < jsonfile

What does 'multiline strings are different' mean in RIDE (Robot Framework) output?

I am trying to compare the data of two CSV files and followed the process below in RIDE:
${csvA} = Get File ${filePathA}
${csvB} = Get File ${filePathB}
Should Be Equal As Strings ${csvA} ${csvB}
Here are the contents of my two CSV files:
csvA data
Harshil,45,8.03,DMJ
Divy,55,8,VVN
Parth,1,9,vvn
kjhjmb,44,0.5,bugg
csvB data
Harshil,45,8.03,DMJ
Divy,55,78,VVN
Parth,1,9,vvnbcb
acc,5,6,afafa
As some of the data does not match, when I run the code in RIDE the result is FAIL. But the log shows the data below:
Multiline strings are different:
--- first
+++ second
@@ -1,4 +1,4 @@
Harshil,45,8.03,DMJ
-Divy,55,8,VVN
-Parth,1,9,vvn
-kjhjmb,44,0.5,bugg
+Divy,55,78,VVN
+Parth,1,9,vvnbcb
+acc,5,6,afafa
I would like to know the meaning of the --- first, +++ second, and @@ -1,4 +1,4 @@ content.
Thanks in advance!
When robot compares multiline strings (data that has newlines in it), it shows the differences in the same format produced by the standard unix tool diff. Those characters are all part of what's called a unified diff. Even though you pass in raw data, it is treating the data as two files and showing the differences between the two in a format familiar to most programmers.
Here are two references to read more about the format:
What does "@@ -1 +1 @@" mean in Git's diff output? (stackoverflow)
the diff man page (gnu.org)
In short, the @@ line gives you a reference for which line numbers are different, and the + and - show you which lines differ. Here, @@ -1,4 +1,4 @@ means the hunk starts at line 1 and spans 4 lines in both the first and the second string.
In your specific example it's telling you that three lines were different between the two strings: the line beginning with Divy, the line beginning with Parth, and the third line (kjhjmb in the first string, acc in the second). Since the line beginning with Harshil does not show a + or -, it was identical between the two strings.

Term for a "Special Identifier" Embedded in String Data

I'm mostly at a loss for how to describe this, so I'll start with a simple example that is similar to some JSON I'm working with:
"user_interface": {
username: "Hello, %USER.username%",
create_date: "Your account was created on %USER.create_date%",
favorite_color: "Your favorite color is: %USER.fav_color%"
}
The "special identifiers" located in the username create_date and favorite_color fields start and end with % characters, and are supposed to be replaced with the correct information for that particular user. An example for the favorite_color field would be:
Your favorite color is: Orange
Is there a proper term for these identifiers? I'm trying to search google for best practices or libraries when parsing these before I reinvent the wheel, but everything I can think of results in a sea of false-positives.
Just some thoughts on the subject of the %special identifier%. Let's take a look at a small subset of examples that implement similar string replacement.
WSH Shell ExpandEnvironmentStrings Method
Returns an environment variable's expanded value.
WSH .vbs code snippet
Set WshShell = WScript.CreateObject("WScript.Shell")
WScript.Echo WshShell.ExpandEnvironmentStrings("WinDir is %WinDir%")
' WinDir is C:\Windows
.NET Composite Formatting
The .NET Framework composite formatting feature takes a list of objects and a composite format string as input. A composite format string consists of fixed text intermixed with indexed placeholders, called format items, that correspond to the objects in the list. The formatting operation yields a result string that consists of the original fixed text intermixed with the string representation of the objects in the list.
VB.Net code snippet
Console.WriteLine(String.Format("Prime numbers less than 10: {0}, {1}, {2}, {3}, {4}", 1, 2, 3, 5, 7 ))
' Prime numbers less than 10: 1, 2, 3, 5, 7
JavaScript replace Method (with RegEx application)
... The match variables can be used in text replacement where the replacement string has to be determined dynamically... $n ... The nth captured submatch ...
Also called Format Flags, Substitution, Backreference and Format specifiers.
JavaScript code snippet
console.log("Hello, World!".replace(/(\w+)\W+(\w+)/g, "$1, dear $2"))
// Hello, dear World!
Python Format strings
Format strings contain “replacement fields” surrounded by curly braces {}. Anything that is not contained in braces is considered literal text, which is copied unchanged to the output...
Python code snippet
print "The sum of 1 + 2 is {0}".format(1+2)
# The sum of 1 + 2 is 3
Ruby String Interpolation
Double-quote strings allow interpolation of other values using #{...} ...
Ruby code snippet
res = 3
puts "The sum of 1 + 2 is #{res}"
# The sum of 1 + 2 is 3
TestComplete Custom String Generator
... A string of macros, text, format specifiers and regular expressions that will be used to generate values. The default value of this parameter is %INT(1, 2147483647, 1) %NAME(ANY, FULL) lives in %CITY. ... Also, you can format the generated values using special format specifiers. For instance, you can use the following macro to generate a sequence of integer values with the specified minimum length (3 characters) -- %0.3d%INT(1, 100, 3).
Angular Expression
Angular expressions are JavaScript-like code snippets that are mainly placed in interpolation bindings such as {{ textBinding }}...
Django Templates
Variables are surrounded by {{ and }} like this: My first name is {{ first_name }}. My last name is {{ last_name }}. With a context of {'first_name': 'John', 'last_name': 'Doe'}, this template renders to: My first name is John. My last name is Doe.
Node.js v4 Template strings
... Template strings can contain place holders. These are indicated by the Dollar sign and curly braces (${expression}). The expressions in the place holders and the text between them get passed to a function...
JavaScript code snippet
var res = 3;
console.log(`The sum of 1 + 2 is ${res}`);
// The sum of 1 + 2 is 3
C/C++ Macros
Preprocessing expands macros in all lines that are not preprocessor directives...
Replacement in source code.
C++ code snippet
std::cout << __DATE__;
// Jan 8 2016
AutoIt Macros
AutoIt has a number of Macros that are special read-only variables used by AutoIt. Macros start with the @ character ...
Replacement in source code.
AutoIt code snippet
MsgBox(0, "", "CPU Architecture is " & @CPUArch)
; CPU Architecture is X64
SharePoint solution Replaceable Parameters
Replaceable parameters, or tokens, can be used inside project files to provide values for SharePoint solution items whose actual values are not known at design time. They are similar in function to the standard Visual Studio template tokens... Tokens begin and end with a dollar sign ($) character. Any tokens used are replaced with actual values when a project is packaged into a SharePoint solution package (.wsp) file at deployment time. For example, the token $SharePoint.Package.Name$ might resolve to the string "Test SharePoint Package."
Apache Ant Replace Task
Replace is a directory based task for replacing the occurrence of a given string with another string in selected file... token... the token which must be replaced...
So, based on the functional context, I would call it a %token% (a flavor of string with an identified "meaning").
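Whatever name you settle on, expanding such tokens usually comes down to scanning for the delimiter pattern and looking each name up in a table. A minimal C++ sketch, assuming % delimiters as in the question (the expand_tokens helper and the map contents are made up for illustration):
#include <iostream>
#include <map>
#include <regex>
#include <string>

// Expand %NAME% tokens from a lookup table; unknown tokens are left as-is.
std::string expand_tokens(const std::string& input,
                          const std::map<std::string, std::string>& values)
{
    std::regex token_pattern("%([A-Za-z0-9_.]+)%");
    std::string output;
    auto last = input.cbegin();
    for (std::sregex_iterator it(input.cbegin(), input.cend(), token_pattern), end;
         it != end; ++it)
    {
        output.append(last, (*it)[0].first);       // copy text before the token
        auto found = values.find((*it)[1].str());  // look up the token name
        output += (found != values.end()) ? found->second : (*it)[0].str();
        last = (*it)[0].second;                    // continue after the token
    }
    output.append(last, input.cend());             // copy any trailing text
    return output;
}

int main()
{
    std::map<std::string, std::string> user = {{"USER.fav_color", "Orange"}};
    std::cout << expand_tokens("Your favorite color is: %USER.fav_color%", user)
              << std::endl;  // prints: Your favorite color is: Orange
    return 0;
}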

Entry delimiter of JSON files for Hive table

We are collecting JSON data (public social media posts in particular) via REST API invocations, which we plan to dump into HDFS, then abstract a Hive table on top of it using a SerDe. I wonder, though, what would be the appropriate delimiter per JSON entry in a file? Is it a newline ("\n")? So it would look like this:
{ id: entry1 ... post: }
{ id: entry2 ... post: }
...
{ id: entryn ... post: }
What if we encounter a newline character within the JSON data itself, for example in post?
The best way would be one record per line, separated by "\n" exactly as you guessed.
This also means that you should be careful to escape "\n" that may be inside the JSON elements.
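For illustration, a minimal C++ sketch of that escaping (a real pipeline would normally let its JSON library handle this; escape_json_string is just a hypothetical helper):
#include <iostream>
#include <string>

// Escape characters that would break the one-record-per-line layout
// or the JSON string itself.
std::string escape_json_string(const std::string& value)
{
    std::string out;
    for (char c : value)
    {
        switch (c)
        {
            case '\n': out += "\\n";  break;  // literal newline -> the two characters \n
            case '\r': out += "\\r";  break;
            case '\t': out += "\\t";  break;
            case '"':  out += "\\\""; break;
            case '\\': out += "\\\\"; break;
            default:   out += c;
        }
    }
    return out;
}

int main()
{
    // A post whose text spans two lines still ends up on a single record line.
    std::string post = "first line of post\nsecond line of post";
    std::cout << "{\"id\": \"entry1\", \"post\": \"" << escape_json_string(post) << "\"}\n";
    return 0;
}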
Indented JSON won't work well with Hadoop/Hive: to distribute processing, Hadoop must be able to tell where a record ends, so it can split the processing of a file of N bytes across W workers into W chunks of roughly N/W bytes each.
The splitting is done by the particular InputFormat that's being used; in the case of text, TextInputFormat.
TextInputFormat will basically split the file at the first instance of "\n" found after byte i*N/W (for i from 1 to W-1).
For this reason, having other "\n" characters inside a record would confuse Hadoop and give you incomplete records.
As an alternative (which I wouldn't recommend), you could use a character other than "\n" by configuring the property "textinputformat.record.delimiter" when reading the file through Hadoop/Hive, choosing a character that won't appear in the JSON (for instance, \001, i.e. CTRL-A, which Hive commonly uses as a field delimiter). But that can be tricky, since the delimiter also has to be supported by the SerDe.
Also, if you change the record delimiter, anybody who copies or uses the file on HDFS must be aware of it, or they won't be able to parse the file correctly and will need special code to do so. If you keep "\n" as the delimiter, the files remain normal text files and can be used by other tools.
As for the SerDe, I'd recommend this one, with the disclaimer that I wrote it :)
https://github.com/rcongiu/Hive-JSON-Serde