How to print a multiline Text box value without '\n' in tkinter (value fetched from MySQL)

I want to print a Text widget's value from tkinter without the '\n' characters. When I print it, each line is joined to the next with '\n'. I actually fetch the value from MySQL.

Note that a tkinter Text widget always appends a trailing newline to its contents; you can avoid it at the source with text.get("1.0", "end-1c"). To remove a \n from the end of a string you already have, use rstrip('\n'):
lines = [
    "Hello, world!\n",
    "No newline",
    "",  # empty
    "Two\nNewlines\n",
    "Intermediate\nNewline",
]
for line in lines:
    s = line.rstrip("\n")
    print("-" * 20)
    print(s)
# gives
"""
--------------------
Hello, world!
--------------------
No newline
--------------------
--------------------
Two
Newlines
--------------------
Intermediate
Newline
"""


remove comma in jsonpath template using bash

I have a JSONPath template query:
oc get event -o jsonpath='{range .items[*]}{.name}{","}{.message}{","}{.eventname}{"\n"}{end}' > /tmp/test.csv
I'm redirecting the output to a CSV file:
name1,message of event one,eventname1
name2,message of,event,two,eventname2
name3,message of event three,eventname3
name4,message of, event four,eventname4
As the output above shows, some messages contain commas. I want to replace those commas with spaces in the second column (the message) in the bash script. Does anyone have thoughts on how to achieve this?
Expected result
name1,message of event one,eventname1
name2,message of event two,eventname2
name3,message of event three,eventname3
name4,message of event four,eventname4
Assuming you can change the field delimiter to a character known not to exist in the data (e.g., |), you would now be generating:
name1|message of event one|eventname1
name2|message of,event,two|eventname2
name3|message of event three|eventname3
name4|message of, event four|eventname4
From here we can use sed to a) normalize the spacing around commas and replace each , with a space, and then b) replace | with ,:
$ sed 's/[ ]*,[ ]*/,/g;s/,/ /g;s/|/,/g'
NOTE: the s/[ ]*,[ ]*/,/g is needed to strip the extra spaces around commas (as would otherwise remain in line #4 once , is replaced with a space)
When applied to the data this generates:
name1,message of event one,eventname1
name2,message of event two,eventname2
name3,message of event three,eventname3
name4,message of event four,eventname4
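To try this end-to-end, here is a minimal, self-contained sketch; the file name and sample data are made up for illustration:

```shell
# Hypothetical sample of the pipe-delimited output (file name is made up)
cat > /tmp/events_pipe.txt <<'EOF'
name1|message of event one|eventname1
name2|message of,event,two|eventname2
name3|message of event three|eventname3
name4|message of, event four|eventname4
EOF

# a) normalize spacing around commas, b) turn commas into spaces,
# c) turn the | delimiters back into commas
sed 's/[ ]*,[ ]*/,/g;s/,/ /g;s/|/,/g' /tmp/events_pipe.txt
```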
Another option using awk (for OP's current data using the , as the field delimiter):
awk -F',' '                      # input field delimiter = ","
{ x = $1 ","                     # start new string as field #1 + ","
  sep = ""                       # initial separator = "" for fields 2 to (NF-1)
  for (i = 2; i < NF; i++) {     # loop through fields 2 to (NF-1)
    gsub(/^[ ]+|[ ]+$/, "", $i)  # trim leading/trailing spaces
    x = x sep $i                 # append current field to x along with sep
    sep = " "                    # use " " as separator for rest of fields
  }
  printf "%s,%s\n", x, $NF       # print "x" plus "," plus the last field ($NF)
}'
When applied to the data this generates:
name1,message of event one,eventname1
name2,message of event two,eventname2
name3,message of event three,eventname3
name4,message of event four,eventname4
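A quick way to exercise this awk variant on the comma-delimited sample (the file name is illustrative):

```shell
# Hypothetical comma-delimited sample, as in the question
cat > /tmp/events_csv.txt <<'EOF'
name1,message of event one,eventname1
name2,message of,event,two,eventname2
name3,message of event three,eventname3
name4,message of, event four,eventname4
EOF

# join fields 2..(NF-1) with spaces, keeping fields 1 and NF as columns
awk -F',' '
{ x = $1 ","
  sep = ""
  for (i = 2; i < NF; i++) {
    gsub(/^[ ]+|[ ]+$/, "", $i)
    x = x sep $i
    sep = " "
  }
  printf "%s,%s\n", x, $NF
}' /tmp/events_csv.txt
```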

Replacing a few sensitive characters in fields with XXX-masked fields in UNIX

I have a table which has been exported to a file in UNIX which has data in CSV format like for e.g.:
File 1:
ACCT_NUM,EXPIRY_DT,FIRST_NAME,LAST_NAME
123456,09-09-2019,Prisi,Kumar
Now I need to mask ACCT_NUM and FIRST_NAME and replace the masked values in File 1, the output should look something like this
File 2:
ACCT_NUM,EXPIRY_DT,FIRST_NAME,LAST_NAME
123XXX,09-09-2019,PRXXX,Kumar
I have separate masking functions for numerical and string fields, I need to know how to replace the masked columns in the original file.
I'm not sure what you want to do with FNR, or what the point of assigning to array a is. This is how I would do it:
$ cat x.awk
#!/bin/sh
awk -F, -vOFS=, '              # Set input and output field separators.
NR == 1 {                      # First record?
    print                      # Just output.
    next                       # Then continue with the next line.
}
NR > 1 {                       # Second and subsequent records?
    if (length($1) < 4) {          # Short account number?
        $1 = "XXX"                 # Replace the whole number.
    } else {
        sub(/...$/, "XXX", $1)     # Change the last three characters.
    }
    if (length($3) < 4) {          # Short first name?
        $3 = "XXX"                 # Replace the whole name.
    } else {
        sub(/...$/, "XXX", $3)     # Change the last three characters.
    }
    print                      # Output the changed line.
}'
Showtime!
$ cat input
ACCT_NUM,EXPIRY_DT,FIRST_NAME,LAST_NAME
123456,09-09-2019,Prisi,Kumar
123,29-12-2017,Jim,Kirk
$ ./x.awk < input
ACCT_NUM,EXPIRY_DT,FIRST_NAME,LAST_NAME
123XXX,09-09-2019,PrXXX,Kumar
XXX,29-12-2017,XXX,Kirk
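For reference, the same masking logic can be compressed into a one-liner. This is a sketch, not the author's script: the mask() helper name is made up, and substr() is used here instead of sub() to replace the last three characters, falling back to a full "XXX" for short values:

```shell
mask() {  # hypothetical helper: masks the last 3 chars of fields 1 and 3
  awk -F, -v OFS=, 'NR > 1 {
    $1 = (length($1) < 4) ? "XXX" : substr($1, 1, length($1)-3) "XXX"
    $3 = (length($3) < 4) ? "XXX" : substr($3, 1, length($3)-3) "XXX"
  } 1'  # the bare 1 prints every line, including the untouched header
}

printf 'ACCT_NUM,EXPIRY_DT,FIRST_NAME,LAST_NAME\n123456,09-09-2019,Prisi,Kumar\n' | mask
```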

MySQL space comparison hell

Why does this query:
SELECT
"hello" = " hello",
"hello" = "hello ",
"hello" <> "hello ",
"hello" LIKE "hello ",
"hello" LIKE "hello%"
return these results:
"hello" = " hello" -> 0
"hello" = "hello " -> 1
"hello" <> "hello " -> 0
"hello" LIKE "hello " -> 0
"hello" LIKE "hello%" -> 1
In particular, I was expecting "hello" = "hello " to be false and "hello" <> "hello " to be true (the LIKE in this case, behaves exactly as I wanted).
Why does MySQL compare spaces in such an arbitrary and inconsistent way (returning 0 for "hello" = " hello" but 1 for "hello" = "hello ")?
Is there any way to configure MySQL to ALWAYS work in "strict mode" (in other words, to make it always behave like LIKE for varchar/text comparisons)?
Sadly, I'm using a proprietary framework, so I cannot force it to always use LIKE in queries for text comparisons or to trim all the inputs.
"hello" = " hello" -- 0,
Because it is not an exact match
"hello" = "hello " -- 1,
Because trailing spaces are ignored for varchar types
"hello" <> "hello " -- 0,
Because trailing spaces are ignored for varchar types
"hello" LIKE "hello " -- 0,
Because trailing spaces are ignored for varchar types and LIKE performs matching on a per-char basis
"hello" LIKE "hello%" -- 1,
Because it is a partial pattern matching
If you want strict checking, you can apply BINARY to the values being compared.
mysql> select binary('hello')=binary('hello ') bin_eq, 'hello'='hello ' eq;
+--------+----+
| bin_eq | eq |
+--------+----+
|      0 |  1 |
+--------+----+
Refer to:
MySQL: Comparison Functions and Operators
Trailing spaces are ignored in comparisons if the value is of type CHAR or VARCHAR. See the discussion in the documentation:
All MySQL collations are of type PADSPACE. This means that all CHAR, VARCHAR, and TEXT values in MySQL are compared without regard to any trailing spaces. “Comparison” in this context does not include the LIKE pattern-matching operator, for which trailing spaces are significant.

(sed/awk) Extract values from text to csv file - even/odd lines pattern

I need to extract some numeric values from a given ASCII text file and export them to a specifically formatted CSV file. The input file follows an even/odd line pattern:
SCF Done: E(UHF) = -216.432419652 A.U. after 12 cycles
CCSD(T)= -0.21667965032D+03
SCF Done: E(UHF) = -213.594303492 A.U. after 10 cycles
CCSD(T)= -0.21379841974D+03
SCF Done: E(UHF) = -2.86120139864 A.U. after 6 cycles
CCSD(T)= -0.29007031339D+01
and so on.
I need the 5th-column value from the odd lines and the 2nd-column value from the even lines. They should be printed to a semicolon-separated CSV file with 10 values in each row, so the output should look like:
-216.432419652;-0.21667965032D+03;-213.594303492;-0.21379841974D+03;-2.86120139864;-0.29007031339D+01; ...line break after 5 pairs of values
I started with awk '{print $5}' and awk '{print $2}', but I was not successful in creating a pattern that acts only on even or odd lines.
Is there a simple way to do that?
The following script doesn't use a lot of the great power of awk, but will do the job for you and is hopefully understandable:
NR % 2 { printf "%s;", $5 }
NR % 2 == 0 { printf "%s;", $2 }
NR % 10 == 0 { print "" }
END { print "" }
Usage (save the above as script.awk):
awk -f script.awk input.txt
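A self-contained run on the sample data (file names here are made up):

```shell
# The even/odd script from above, saved to a temporary file
cat > /tmp/script.awk <<'EOF'
NR % 2       { printf "%s;", $5 }
NR % 2 == 0  { printf "%s;", $2 }
NR % 10 == 0 { print "" }
END          { print "" }
EOF

# Sample input in the question's even/odd format
cat > /tmp/input.txt <<'EOF'
SCF Done: E(UHF) = -216.432419652 A.U. after 12 cycles
CCSD(T)= -0.21667965032D+03
SCF Done: E(UHF) = -213.594303492 A.U. after 10 cycles
CCSD(T)= -0.21379841974D+03
EOF

awk -f /tmp/script.awk /tmp/input.txt
```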
Given a file called data.txt, try:
awk '/SCF/{ printf "%s;", $5 } /CCSD/{ printf "%s;", $2 } NR % 10 == 0 { printf "\n" }' data.txt
Something like this could work:
awk '{x = NF > 3 ? $5 : $2 ; printf("%s;",x)}(NR % 10 == 0){print OFS}' file
Here NF > 3 ? $5 : $2 is a ternary operator: it checks the number of fields (NF, a built-in) on the line and assigns $5 to x when there are more than 3 fields, else $2. printf formats each value with a trailing ";". NR is another built-in that keeps track of the number of lines read; we use the modulo operator to detect when 10 lines have been crossed, at which point print OFS emits a row break (print supplies the trailing \n; OFS is the built-in output field separator).
This might work for you:
tr -s ' ' ',' <file | paste -sd',\n' | cut -d, -f5,11 | paste -sd',,,,\n'
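To see what this pipeline actually produces on the sample data: tr squeezes runs of spaces into commas, the first paste joins each odd/even pair onto one line, cut keeps the two value fields, and the second paste joins groups of five pairs per row. Note that it assumes single spaces between tokens and emits commas rather than the requested semicolons; the file name is made up:

```shell
cat > /tmp/energies.txt <<'EOF'
SCF Done: E(UHF) = -216.432419652 A.U. after 12 cycles
CCSD(T)= -0.21667965032D+03
SCF Done: E(UHF) = -213.594303492 A.U. after 10 cycles
CCSD(T)= -0.21379841974D+03
EOF

tr -s ' ' ',' < /tmp/energies.txt | paste -sd',\n' | cut -d, -f5,11 | paste -sd',,,,\n'
```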

Making csv from txt files

I have a lot of txt files like this:
Title 1
Text 1 (more than one line)
And I would like to make one csv file from all of them that it will look like this:
Title 1,Text 1
Title 2,Text 2
Title 3,Text 3
etc
How could I do it? I think awk would be a good fit for this, but I don't know how to realize it.
May I suggest:
paste -d, file1 file2 file3
To handle large numbers of files, max 40 per output file (untested, but close):
echo files... | xargs -n40 >tempfile
num=1
while read -r line
do
    paste -d, $line >outfile.$num
    let num=num+1
done <tempfile
This is approximately what you posted with some improvements.
for text in *
do
    awk 'BEGIN {q="\""; print q}
    NR==1 {
        gsub(" "," ")  # why?
        gsub("Title: *","")
        print
    }
    NR>1 {
        gsub(" "," ")  # why?
        gsub("Content: *","")
        gsub(q,q q)
        print
    }
    END {print q}' "$text" >> ../final
done
Edit:
If you have a bunch of files that consist of only two lines, try this:
sed 'N;s/\n/,/' file*.txt
If the files contain more than two lines each, it will instead join each successive pair of lines with a comma.
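A quick check of the sed approach on two-line files (file names are illustrative):

```shell
printf 'Title 1\nText 1\n' > /tmp/file1.txt
printf 'Title 2\nText 2\n' > /tmp/file2.txt

# N pulls the next input line into the pattern space;
# the substitution replaces the embedded newline with a comma.
sed 'N;s/\n/,/' /tmp/file1.txt /tmp/file2.txt
```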
Given 3 files containing the following data:
file1.txt
Heading 1
Text 1
Text 2
file2.txt
Heading 2
Text 1
file3.txt
Heading 3
Text 1
text 2
Text 3
The expected results are:
Heading 1,Text 1,Text 2
Heading 2,Text 1
Heading 3,Text 1,text 2,Text 3
This is accomplished using the program createcsv.awk below invoked as
gawk -f createcsv.awk file1.txt file2.txt file3.txt
createcsv.awk
{
    if (1 == FNR) {
        # First line of a new file: flush the previous file's line.
        if (csvline != "") {
            # Still "" for the first file (and empty files), which we can skip.
            print csvline;
        }
        csvline = "";
        delimiter = "";
    }
    csvline = csvline delimiter $0;
    if ("" == delimiter) { delimiter = "," }
}
END{
print csvline;
}
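A self-contained run of createcsv.awk; plain awk works just as well as gawk here, and the file names are made up:

```shell
# Write the accumulator script: build one CSV line per input file,
# flushing the previous file's line whenever a new file starts (FNR == 1).
cat > /tmp/createcsv.awk <<'EOF'
{
    if (1 == FNR) {
        if (csvline != "") { print csvline }
        csvline = ""
        delimiter = ""
    }
    csvline = csvline delimiter $0
    if ("" == delimiter) { delimiter = "," }
}
END { print csvline }
EOF

printf 'Heading 1\nText 1\nText 2\n' > /tmp/f1.txt
printf 'Heading 2\nText 1\n'         > /tmp/f2.txt

awk -f /tmp/createcsv.awk /tmp/f1.txt /tmp/f2.txt
```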