Carriage return (\r) does not work when run on a GitLab shared runner - gitlab-ci-runner

I built a progress bar for a Python application I developed. It is expected to look something like this:
[██████████████ ] 70%
This works fine when run in a local terminal and in Docker containers built on my machine. However, it doesn't display as expected when run on a GitLab shared runner. Instead of a single line that shows the progress so far, it displays multiple lines, something like below:
[█████████████ ] 68%
[█████████████ ] 68%
[█████████████ ] 68%
[█████████████ ] 68%
[█████████████ ] 69%
[█████████████ ] 69%
[█████████████ ] 69%
[█████████████ ] 69%
[█████████████ ] 69%
[██████████████ ] 70%
[██████████████ ] 70%
To understand the possible reasons, I tried to find out the terminal properties of the shared-runner containers. So I executed the commands below, all of which returned errors, and I need help fixing the problem.
$ stty size
stty: 'standard input': Inappropriate ioctl for device
$ tput cols
tput: No value for $TERM and no -T specified
My .gitlab-ci.yml file looks like this:
run-project:
  image: python:3.6
  script:
    - stty size
    - python3 Test.py
Below is the sample code I am using to show the progress bar:
import sys

i = 1
sys.stdout.write('Start')
for k in range(100000):
    i += 1
    sys.stdout.write('\r')
    j = (i / 100000)
    sys.stdout.write("[%-20s] %d%%" % ('█' * int(20 * j), int(100 * j)))
What I want is a single line that displays the progress of the task, instead of the terminal displaying hundreds of lines each showing the progress so far (which is why I am using the carriage return).
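As the stty and tput errors above show, the job's stdout is not attached to a TTY, which is why programs that rely on \r for in-place updates often fall back to plain line output when no terminal is detected. A minimal sketch of that fallback (the report_progress helper and the 10% step are illustrative, not part of the original code):

import sys

def report_progress(done, total, width=20):
    """Render the progress bar; update in place only when stdout is a TTY."""
    frac = done / total
    bar = "[%-*s] %d%%" % (width, "█" * int(width * frac), int(100 * frac))
    if sys.stdout.isatty():
        # Interactive terminal: rewrite the same line with \r.
        sys.stdout.write("\r" + bar)
        sys.stdout.flush()
    elif done % (total // 10) == 0:
        # CI log (no TTY): emit a fresh line only every 10% instead.
        print(bar)

for k in range(1, 100001):
    report_progress(k, 100000)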

This is now resolved in GitLab, and the carriage return works!
Task status:
[████████████████████] 100%

Related

Sikuli IDE command wait("image") not waiting for image to appear before script continues

I am new to Sikuli and am trying it out with a very simple script that looks like the one below.
The wait and click commands are used and they are working. The issue I am facing is that wait("1513068228396.png", 3600) is not waiting until the image appears; it waits some 10 to 15 seconds and then executes the next command. I tried adding some logs, and also tried other images with the same wait command, still the same result.
wait("1513067960826.png",60)
click(Pattern("1513066493827.png").targetOffset(-106,2))
sleep(2)
click("1513066637741.png")
sleep(1)
click("1513599247108.png")
sleep(5)
print "wait for my image"
wait("1513068228396.png",3600) # Facing issue in this line
print "found my image"
Output log:
wait for my image
[debug] Region: find: waiting 3600.0 secs for 1513068228396.png to appear in R[0,0 1920x1080]#S(0)
[debug] Image: reused: 1513068228396.png (file:/D:/softwares/sikuli/SENINFO_V100R002C00SPC700.sikuli/1513068228396.png)
[debug] Region: checkLastSeen: not there
[debug] Region: find: 1513068228396.png has appeared at M[832,379 30x16]#S(S(0)[0,0 1920x1080]) S:0.70 C:847,387 [753 msec]
found my image
Any suggestions on how to solve this issue?
Maybe that image matches some other region on the screen. You could try setting the similarity to a higher value:
wait(Pattern("some_image.png").similar(0.8),) # if you want the 80% of similarity
wait(Pattern("some_image.png").exact()) # if you want the 100% of similarity
Also, I encourage you to use if exists instead of wait; wait will end the program if the image doesn't appear:
if exists(Pattern("some_image.png").exact(), 3600):
    click("some_image.png")
You can find Pattern documentation here
The wait(pattern, 3600) is equivalent to wait(pattern, FOREVER), which is described here, and will wait for the pattern indefinitely. In a case like yours, the only thing that can explain this behavior is that the pattern was actually found on the screen, and the line below confirms that:
Region: find: 1513068228396.png has appeared at M[832,379 30x16]#S(S(0)[0,0 1920x1080]) S:0.70 C:847,387 [753 msec]
Perhaps this pattern appears elsewhere and you missed it? Or maybe the similarity parameter is too low and another pattern gets recognized. To check, try the highlight(1) method:
ptrn = find("pattern.png")
ptrn.highlight(1)
This might shed some light.
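If another part of the screen really is matching first, one more trick worth trying (a sketch; the Region coordinates are placeholders you would take from the highlight above) is to restrict the search to the area where the image is expected:

# Search only inside a sub-region of the screen (coordinates are placeholders)
region = Region(700, 300, 400, 300)  # x, y, width, height
region.highlight(1)  # flash the region to check it covers the right area
region.wait(Pattern("1513068228396.png").similar(0.9), 3600)
print "found my image inside the region"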

How to execute 'Zoom Fit' in ModelSim/QuestaSim from TCL console?

I'm using ModelSim / QuestaSim from the command line in GUI mode. When ModelSim runs in GUI mode, I would like to execute a 'Zoom Fit' from my imported 'wave.do' file.
I pass this file to vsim via -do wave.do. Here is the script:
add wave *
run -all
I started vsim and saved a waveform window as test.do. This file contains statements like this:
WaveRestoreZoom {0 fs} {2724750 ps}
Is it possible to calculate the upper boundary in TCL?
I also found a simtime statement, but using simtime as the second parameter gives an error:
VSIM1> simtime
# {5,195 ns} {1 } /arith_counter_gray_tb 0 0
VSIM1> WaveRestoreZoom {0 fs} {simetime}
# zoomrange: invalid range "0 fs simetime"
If I understand what you're trying to do correctly, wave zoom full works for me.
Your technique works if you use WaveRestoreZoom {0 fs} [simtime]. By putting simtime in curly braces, you're asking for it to be treated as a literal string. Square brackets ask it to try to evaluate the expression within. You could equally use WaveRestoreZoom {0 fs} [eval simtime].

How to run a TCL script every 10 minutes?

My TCL script:
source reboot_patch.tcl
set a 1
while {$a < 10} {
    exec reboot_patch.tcl
    after 60000
    incr a
}
I need to run the "reboot_patch.tcl" script every 1 minute on my system. I wrote the above script, but it runs only once and then exits.
Following is the "reboot_patch.tcl" script:
#!/usr/bin/tcl
package require Expect
spawn telnet 40.1.1.2
expect "*console."
send "\r"
expect "*ogin:"
send "test\r"
expect "*word:"
send "test\r"
expect "*>"
send "clear log\r"
expect "*#"
send "commit \r"
expect "*#"
Please suggest a way to achieve this. Thanks in advance.
Script to print the numbers from 1 to 10 on Windows 7:
#!c:\Tcl\bin\tclsh
set a 1
while {$a < 11} {
    puts $a
    incr a
}
I am unable to run the above script using the "./" format on Windows 7.
In general, the exec command returns the output of the program's execution. It is our responsibility to capture, print, and manipulate it.
You have to print it manually, like:
puts [ exec ./reboot_patch.tcl ]
Or like:
set result [ exec ./reboot_patch.tcl ]
puts $result
Since you are using exec without printing its result, you have not seen any output. Then how come it got executed the first time? What else could have done that except the following?
source reboot_patch.tcl
Well, since you sourced the file, it got executed, which looked like the first iteration's run, but that run did not actually come from the exec command.
Note: sourcing a file is only required if you are calling one of its procs. As far as I can see, you don't have any procs there, so the source is not required at all.

Next free device option for qemu-nbd

Is there an option for the qemu-nbd command to get the next free, i.e. unused, NBD device, like losetup -f does? The manpage of 0.0.1 (which is the version in the currently stable release 1.7.0 of QEMU) doesn't mention anything.
You can query attributes about nbd devices in sysfs.
For example:
cat /sys/class/block/nbd0/size
This will return 0 if /dev/nbd0 is free, or the size of the mapped image file if it is in use.
So you could iterate over each device until you find one whose size is 0, then try that one with qemu-nbd.
Something like this should do it:
for x in /sys/class/block/nbd* ; do
  S=`cat $x/size`
  if [ "$S" == "0" ] ; then
    qemu-nbd -c /dev/`basename $x` some_file.img
    break
  fi
done
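If you would rather do the probe from a scripting language, the same sysfs check is straightforward in Python as well (a sketch; some_file.img is a placeholder as above):

import glob
import os
import subprocess

def next_free_nbd():
    """Return the first /dev/nbdX whose sysfs size is 0 (i.e. unused)."""
    for sysdir in sorted(glob.glob("/sys/class/block/nbd*")):
        with open(os.path.join(sysdir, "size")) as f:
            if f.read().strip() == "0":
                return "/dev/" + os.path.basename(sysdir)
    return None

dev = next_free_nbd()
if dev:
    subprocess.check_call(["qemu-nbd", "-c", dev, "some_file.img"])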

Is there any way in Elasticsearch to get results as a CSV file via the curl API?

I am using Elasticsearch.
I need results from Elasticsearch as a CSV file.
Is there any curl URL or any plugin to achieve this?
I've done just this using cURL and jq ("like sed, but for JSON"). For example, you can do the following to get CSV output for the top 20 values of a given facet:
$ curl -X GET 'http://localhost:9200/myindex/item/_search?from=0&size=0' -d '
{"from": 0,
 "size": 0,
 "facets": {
   "sourceResource.subject.name": {
     "global": true,
     "terms": {
       "order": "count",
       "size": 20,
       "all_terms": true,
       "field": "sourceResource.subject.name.not_analyzed"
     }
   }
 },
 "sort": [
   {
     "_score": "desc"
   }
 ],
 "query": {
   "filtered": {
     "query": {
       "match_all": {}
     }
   }
 }
}' | jq -r '.facets["sourceResource.subject.name"].terms[] | [.term, .count] | @csv'
"United States",33755
"Charities--Massachusetts",8304
"Almshouses--Massachusetts--Tewksbury",8304
"Shields",4232
"Coat of arms",4214
"Springfield College",3422
"Men",3136
"Trees",3086
"Session Laws--Massachusetts",2668
"Baseball players",2543
"Animals",2527
"Books",2119
"Women",2004
"Landscape",1940
"Floral",1821
"Architecture, Domestic--Lowell (Mass)--History",1785
"Parks",1745
"Buildings",1730
"Houses",1611
"Snow",1579
I've used Python successfully, and the scripting approach is intuitive and concise. The ES client for Python makes life easy. First grab the latest Elasticsearch client for Python here:
http://www.elasticsearch.org/blog/unleash-the-clients-ruby-python-php-perl/#python
Then your Python script can include calls like:
import elasticsearch
import csv

es = elasticsearch.Elasticsearch(["10.1.1.1:9200"])

# this returns up to 500 rows, adjust to your needs
res = es.search(index="YourIndexName", body={"query": {"match": {"title": "elasticsearch"}}}, size=500)
sample = res['hits']['hits']

# then open a csv file, and loop through the results, writing to the csv
with open('outputfile.tsv', 'wb') as csvfile:
    # we use TAB delimited, to handle cases where freeform text may have a comma
    filewriter = csv.writer(csvfile, delimiter='\t',
                            quotechar='|', quoting=csv.QUOTE_MINIMAL)
    # create column header row
    filewriter.writerow(["column1", "column2", "column3"])  # change the column labels here
    for hit in sample:
        # fill columns 1, 2, 3 with your data
        col1 = hit["some"]["deeply"]["nested"]["field"].decode('utf-8')  # replace these nested key names with your own
        col1 = col1.replace('\n', ' ')
        # col2 = ..., col3 = ..., defined the same way
        filewriter.writerow([col1, col2, col3])
You may want to wrap the nested key lookups in try/except error handling, since documents are unstructured and may not have the field from time to time (it depends on your index).
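A minimal sketch of that guard, reusing the placeholder field names from the snippet above:

try:
    col1 = hit["some"]["deeply"]["nested"]["field"]
except KeyError:
    col1 = ""  # this document is missing the field; fall back to an empty cell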
I have a complete Python sample script using the latest ES python client available here:
https://github.com/jeffsteinmetz/pyes2csv
You can use the elasticsearch-head plugin.
Once you have the plugin installed, it is available at http://localhost:9200/_plugin/head/. Navigate to the structured query tab, provide the query details, and select 'csv' from the 'Output Results' dropdown.
I don't think there is a plugin that will give you CSV results directly from the search engine, so you will have to query ElasticSearch to retrieve results and then write them to a CSV file.
Command line
If you're on a Unix-like OS, then you might be able to make some headway with es2unix which will give you search results back in raw text format on the command line and so should be scriptable.
You could then dump those results to a text file, or pipe them to awk or similar to format them as CSV. There is a -o flag available, but it only gives 'raw' format at the moment.
Java
I found an example using Java - but haven't tested it.
Python
You could query ElasticSearch with something like pyes and write the result set to a file with the standard csv writer library.
Perl
Using Perl, you could use Clinton Gormley's GIST linked by Rakesh: https://gist.github.com/clintongormley/2049562
Shameless plug: I wrote estab, a command-line program to export Elasticsearch documents to tab-separated values.
Example:
$ export MYINDEX=localhost:9200/test/default/
$ curl -XPOST $MYINDEX -d '{"name": "Tim", "color": {"fav": "red"}}'
$ curl -XPOST $MYINDEX -d '{"name": "Alice", "color": {"fav": "yellow"}}'
$ curl -XPOST $MYINDEX -d '{"name": "Brian", "color": {"fav": "green"}}'
$ estab -indices "test" -f "name color.fav"
Brian green
Tim red
Alice yellow
estab can handle exports from multiple indices, custom queries, missing values, lists of values, and nested fields, and it's reasonably fast.
If you are using Kibana (app/discover in general), you can make your query in the UI, then save it and use Share -> CSV Reports. This creates a CSV with a line for each record, and the columns are comma-separated.
I have been using stash-query (https://github.com/robbydyer/stash-query) for this.
I find it quite convenient and it works well, though I struggle with the install every time I redo it (this is due to me not being very fluent with gems and Ruby).
On Ubuntu 16.04, what seemed to work was:
apt install ruby  # installs Ruby
sudo apt-get install libcurl3 libcurl3-gnutls libcurl4-openssl-dev  # curl dependencies for Ruby, because stash-query works via the Elasticsearch REST API
gem install stash-query  # installs stash-query
and then you should be good to go.
This blog post describes how to build it as well:
https://robbydyer.wordpress.com/2014/08/25/exporting-from-kibana/
You can use elasticsearch2csv, a small and effective Python 3 script that uses the Elasticsearch scroll API and handles big query responses.
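For comparison, the same scroll-and-write approach takes only a few lines with the official elasticsearch-py client (a sketch; the host, index name, and field names are placeholders):

import csv
from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan

es = Elasticsearch(["http://localhost:9200"])

# scan() wraps the scroll API and yields every matching document
hits = scan(es, index="myindex", query={"query": {"match_all": {}}})

with open("output.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["field1", "field2"])  # placeholder column names
    for hit in hits:
        src = hit["_source"]
        # .get() tolerates documents that are missing a field
        writer.writerow([src.get("field1", ""), src.get("field2", "")])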
You can use this GIST. It's simple.
It's in Perl and you can get some help from it.
Please download it and see the usage on GitHub. Here is the link:
GIST GitHub
Or, if you want it in Java, then go for elasticsearch-river-csv.