Making all the decimals equal with sed [closed] - csv

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 1 year ago.
I have a csv file with this structure:
123;rr;2;RRyO, chess mobil;pio;25.766;1;0;24353;21.6;;S
1243;rho;9;RpO, chess yext cat;downpio;67.98;1;0;237753;25.34600;;S
I want all the numbers of a specific column to have only 2 decimals (adding or removing decimals).
With this output
123;rr;2;RRyO, chess mobil;pio;25.766;1;0;24353;21.60;;S
1243;rho;9;RpO, chess yext cat;downpio;67.98;1;0;237753;25.34;;S
I have tried this, but it doesn't work:
sed 's/[[:digit:]]*\.//g' data.csv
Any idea?
Maybe a script is needed?

Perl to the rescue!
perl -F';' -lane '$F[9] = sprintf "%.2f", $F[9]; print join ";", @F' -- file.csv
Note that it will set the value on line 2 to 25.35, not 25.34, as that's how %f rounds 25.346.
You can use
$F[9] = sprintf "%.2f", int($F[9] * 100) / 100
to get the output you want.
In sed, you need to distinguish the two cases: either there's only a single decimal digit, or there are more than two.
sed -E -e 's/(;[0-9]+)\.([0-9])(;[^;]*;[^;]*)$/\1.\20\3/' \
       -e 's/(;[0-9]+)\.([0-9]{2})[0-9]+(;[^;]*;[^;]*)$/\1.\2\3/' file.csv
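Run against the sample rows, the two expressions produce the requested output (a quick check; assumes a sed that supports -E, i.e. GNU or BSD):

```shell
printf '%s\n' \
  '123;rr;2;RRyO, chess mobil;pio;25.766;1;0;24353;21.6;;S' \
  '1243;rho;9;RpO, chess yext cat;downpio;67.98;1;0;237753;25.34600;;S' |
sed -E -e 's/(;[0-9]+)\.([0-9])(;[^;]*;[^;]*)$/\1.\20\3/' \
       -e 's/(;[0-9]+)\.([0-9]{2})[0-9]+(;[^;]*;[^;]*)$/\1.\2\3/'
# 123;rr;2;RRyO, chess mobil;pio;25.766;1;0;24353;21.60;;S
# 1243;rho;9;RpO, chess yext cat;downpio;67.98;1;0;237753;25.34;;S
```

Note the $ anchor is what restricts the edit to the third-from-last field; the 25.766 in field 6 is left alone.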

$ awk '{$(NF-2) = sprintf( "%0.2f", $(NF-2))}1' FS=\; OFS=\; input
123;rr;2;RRyO, chess mobil;pio;25.766;1;0;24353;21.60;;S
1243;rho;9;RpO, chess yext cat;downpio;67.98;1;0;237753;25.35;;S

This might work for you (GNU sed):
sed -E 's/^/;/;s/;[0-9]*\.[0-9]*/&00/g;s/(;[0-9]*\.[0-9]{2})[^;]*/\1/g;s/.//' file
Prepend a csv delimiter to the start of the line so that global regexp will match successfully.
If a field looks like a decimal, append two 0's.
If a field looks like a decimal, shorten it to two decimal places.
Remove the introduced csv delimiter.
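The steps above can be traced on a small made-up line (a hypothetical sample, not from the question):

```shell
echo '1;2.5;3.456;x' |
sed -E 's/^/;/;s/;[0-9]*\.[0-9]*/&00/g;s/(;[0-9]*\.[0-9]{2})[^;]*/\1/g;s/.//'
# 1;2.50;3.45;x   (2.5 was padded, 3.456 was shortened, non-decimals untouched)
```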
N.B. This does not account for rounding.
If rounding is required perhaps:
sed -E 's/^/;/;s/;([0-9]*\.[0-9]*)/$(printf ";%.2f" \1)/g;s/.(.*)/echo "\1"/e' file

Related

Remove the comma in the first column in .csv file using bash code [closed]

Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.
Closed 6 months ago.
I have a data as follows;
12432,20230
I want to remove the comma and replace it with space and want the output as follows;
12432 20230
I used the following code;
sed ’s/,/ /g’ file.csv > file.geno
but it gives error as;
sed: -e expression #1, char 1: unknown command: `�'
Your code is using "smart quotes", ’, instead of single quotes, ', just fix that and your syntax error will go away, i.e. instead of:
sed ’s/,/ /g’ file.csv > file.geno
use:
sed 's/,/ /g' file.csv > file.geno
You don't need the g at the end, by the way, since there is only one comma in your input.
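For comparison, the same command with straight ASCII quotes (and without the g) behaves as expected:

```shell
printf '12432,20230\n' | sed 's/,/ /'
# 12432 20230
```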

how to write a script to generate a bill summary if user entered product id and quantity in tcl? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 1 year ago.
#!/usr/bin/tclsh
set fp [open "tcldata.txt" a+]
set file_data [read $fp]
close $fp
puts "enter product id: \n";
#gets user id given by user.
gets STDIN a
puts "enter quantity: \n";
#gets quantity given by user.
gets STDIN b
set id_row ()
grep read product_id [$file_data]
set product_array = split ('',$id_row);
puts "----------"
puts [llength $fp]
puts "----------"
First, documentation for Tcl commands is at http://www.tcl-lang.org/man/tcl8.6/TclCmd/contents.htm
opening a file a+ starts reading at the bottom of the file, so the file_data variable will be empty. To read from a file use access r. See the open command
standard input is the lowercase stdin -> gets stdin productId
set id_row () -- parentheses have no special meaning in Tcl. This command stores a 2-character string into the id_row variable.
grep read product_id [$file_data] -- what are you trying to do here? It would help if you put some sample data into your question
set product_array = split ('',$id_row);
don't use =, the set command takes at most 2 arguments
Tcl commands don't use parentheses around their arguments, nor commas to separate arguments, just whitespace. See the Tcl syntax rules, particularly the first 3.
llength $fp -- the fp variable is a (closed) file descriptor. What are you trying to do here?

Run sequentially cir and net files with tcl/tk [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 1 year ago.
I would like to give me your advice about using cadence orcad
so I can run sequentially cir or net(netlist) files with pspice.exe in cmd of my pc.
I use the tcl/tk language. I have tried a few things without any results.
I want to make something similar to this one:
set top {C:\Users\file1.net C:\Users\file2.net};
foreach a $top
{exec D:\\tools\\bin\\pspice.exe -r $a}
There are two problems in your code.
The first problem is that \f is an escape sequence in Tcl (it means "form feed"; the point is you don't want that interpretation of the backslashes in your paths). The second problem is that you've got your brace placement wrong in your foreach.
The first problem is best addressed by using / instead of \, and then using file nativename on the value fed to the OS. (You have to do that manually for arguments to executables in exec; Tcl can't fix that for you entirely automatically.) The second problem is just a syntax error.
Try this:
set top {C:/Users/file1.net C:/Users/file2.net}
set pspice D:/tools/bin/pspice.exe
foreach a $top {
# Tcl knows how to convert executable names for you; not the other args though
exec $pspice -r [file nativename $a]
}
On Windows you may also try:
package require twapi
set top {C:/Users/file1.net C:/Users/file2.net}
foreach a $top {
twapi::shell_execute -path [file nativename $a]
}
This will work only if *.net files are already associated with the PSpice application.
The code above relies on the TWAPI extension (if you have it) and its shell_execute function, to open a document just as a double-click would.
It's always a good idea to avoid backslashes in your code (then there's no need to double them for escaping); file nativename will do the job for you.
Source: https://twapi.magicsplat.com/v4.5/shell.html#shell_execute

Prettify a one-line JSON file [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
I downloaded a 203775480 bytes (~200 MiB, exact size is important for a later error) JSON file which has all entries all on one line. Needless to say, my text editor (ViM) cannot efficiently navigate in it and I'm not able to understand anything from it. I'd like to prettify it. I tried to use cat file.json | jq '.', jq '.' file.json, cat file.json | python -m json.tool but none worked. The former two commands print nothing on stdout while the latter says Expecting object: line 1 column 203775480 (char 203775479).
I guess it's broken somewhere near the end, but of course I cannot understand where as I cannot even navigate it.
Have you got some other idea for prettifying it? (I've also tried gg=G in ViM: it did not work).
I found that the file was indeed broken: I accidentally noticed a '[' at the beginning of the file, so I struggled to get to the end of the file and added a ']' at the end (it took me maybe 5 minutes).
Then I reran cat file.json | python -m json.tool and it worked like a charm.
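The failure mode is easy to reproduce on a tiny scale (a sketch; assumes python3 on PATH):

```shell
# An unclosed array fails with a parse position, as with the 200 MiB file
printf '[1, 2, 3' | python3 -m json.tool >/dev/null 2>&1 || echo "parse error"
# With the closing bracket restored, json.tool pretty-prints it
printf '[1, 2, 3]' | python3 -m json.tool
# [
#     1,
#     2,
#     3
# ]
```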

insert ", " before 2nd occurrence of a particular word in string in unix [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 3 years ago.
I want to insert ", " before every occurrence of https, except the first, in the input below
input:
https://gitlab.com/pc-sa/sa-pc-pcc/-/archive/test-demo/abc-test-demo.zip?path=Documentation Assets/temp check
https://gitlab.com/pc-sa/sa-pc-pcc/-/archive/test-demo/abc-test-demo.zip?path=Documentation Assets/temp dir
https://gitlab.com/pc-sa/sa-pc-pcc/-/archive/test-demo/abc-test-demo.zip?path=Documentation Assets/temp dir
Output:
https://gitlab.com/pc-sa/sa-pc-pcc/-/archive/test-demo/abc-test-demo.zip?path=Documentation Assets/temp check", "https://gitlab.com/pc-sa/sa-pc-pcc/-/archive/test-demo/abc-test-demo.zip?path=Documentation Assets/temp dir", "https://gitlab.com/pc-sa/sa-pc-pcc/-/archive/test-demo/abc-test-demo.zip?path=Documentation Assets/temp dir
EDIT: Adding OP's shown efforts(in comments) in post here.
cat /tmp/mmm.txt| sed ':a;N;$!ba; s/https/\","https/2'
Could you please try the following (written and tested with the provided samples).
awk -v s1="\",\"" '{val=(val?val s1:"")$0} END{print val}' Input_file
Explanation: Adding a detailed level of explanation for above code.
awk -v s1="\",\"" ' ##Starting awk program here and creating variable s1 whose value is ","
{ ##Starting this code main BLOCK from here.
val=(val?val s1:"")$0 ##Creating variable val whose value is keep concatenating to its own value.
} ##Closing this program main BLOCK here.
END{ ##Starting END block of this program from here.
print val ##Printing variable val here.
} ##Closing this program END block here.
' Input_file ##Mentioning Input_file name here.
To save output into shell variable and read values from shell variable try like (where var is your shell variable having values, you could name it as per your wish):
var=$(echo "$var" | awk -v s1="\",\"" '{val=(val?val s1:"")$0} END{print val}'); echo "$var"
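On a hypothetical three-line input (url1/url2/url3 are made-up placeholders), the one-liner joins the lines with the "," separator:

```shell
printf 'url1\nurl2\nurl3\n' |
awk -v s1="\",\"" '{val=(val?val s1:"")$0} END{print val}'
# url1","url2","url3
```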
I would change all occurrences, then change the first one back:
sed -e 's/https/", "https/g' -e 's/", "https/https/'
(Your examples don't quite match your description of the goal, so I can't test this solution but I think it's at least roughly correct.)
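The change-all-then-revert-first trick can be sanity-checked on a made-up line with three occurrences:

```shell
printf 'A https B https C https\n' |
sed -e 's/https/", "https/g' -e 's/", "https/https/'
# A https B ", "https C ", "https
```

The second expression has no g flag, so it only undoes the first insertion.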
awk 'BEGIN{ORS=","}1' file
This will give you an extra comma at the end
awk '{printf "%s%s", (FNR==1?"":","), $0}' file
This will not