I would like to know what the @ symbol does in this line of code, and what [@][] does. This is being used in Ansible. Thank you.
json_query("response.result.job | [@][]")
The whole code:
- name: task1
  <removed for simplicity>
  cmd: 'show jobs all'
  register: all_jobs
  until: |
    all_jobs is not failed
    and (all_jobs.stdout | from_json | json_query("response.result.job|[@][]") | default([], true) | length > 0)
    and (all_jobs.stdout | from_json | json_query("response.result.job|[@][]")
         | json_query("[?status != 'FIN']") | length == 0)
  retries: 60
  delay: 30
For those who are curious, I think I found the answer. It is all about JMESPath: @ is the current node, [] flattens a list, [][] flattens nested lists, and [@][] flattens a second-level list by first wrapping the current node in a list. To understand this, follow the link below; there are examples there.
https://jmespath.org/tutorial.html
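To make this concrete, here is a minimal sketch using the Python jmespath library (the same engine behind Ansible's json_query filter); the sample payloads are made up for illustration:

import jmespath

# A single job object: '[@]' wraps the current node in a list,
# and '[]' then flattens, so the result is a flat list either way.
single = {"response": {"result": {"job": {"id": 1, "status": "FIN"}}}}
print(jmespath.search("response.result.job | [@][]", single))
# -> [{'id': 1, 'status': 'FIN'}]

# A list of job objects: '[@]' nests the list, '[]' flattens it back.
many = {"response": {"result": {"job": [{"id": 1}, {"id": 2}]}}}
print(jmespath.search("response.result.job | [@][]", many))
# -> [{'id': 1}, {'id': 2}]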
I usually call another feature and read data from a CSV in the Examples, like below.
Scenario Outline:
* call read('classpath:controller/Controller.feature')
Examples:
|read('classpath:com/testdata/Test.csv')|
This time I still want to read data from the CSV, but use Examples for another purpose, like below. Is it still possible to read data from the CSV, maybe by passing it as a parameter?
Scenario Outline:
* call read('classpath:controller/Controller.feature'){read('classpath:com/testdata/Test.csv')}
Examples:
|gain |spend |
|12000| 12008 |
|3400 | 4655 |
I know it works this way, but I have to pass the index [0], and if I have more test data in the CSV it won't work:
Scenario Outline:
* def testData = read('classpath:com/testdata/Test.csv')
* call read('classpath:controller/Controller.feature'){ "name": "#(testData[0].name)", "age": "#(testData[0].age)"}
Examples:
|gain |spend |
|12000| 12008 |
|3400 | 4655 |
I'll just give one tip. When you use Examples, the row index is available as a variable called __num: https://github.com/karatelabs/karate#scenario-outline-enhancements
So you can do things like this:
Feature:
Scenario Outline:
* def data = [{ id: 0 }, { id: 1 }]
* match (data[__num].id) == temp
Examples:
| temp! |
| 0 |
| 1 |
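So, combining __num with the CSV read from your question, a sketch (assuming the CSV has the name and age columns from your last snippet) could be:

Scenario Outline:
* def testData = read('classpath:com/testdata/Test.csv')
* call read('classpath:controller/Controller.feature') { name: '#(testData[__num].name)', age: '#(testData[__num].age)' }
Examples:
| gain | spend |
| 12000| 12008 |
| 3400 | 4655 |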
I'm trying to run this query in jq:
cat testfile.txt | jq 'fromjson | select(.kubernetes.pod.memory.usage.bytes != null) .kubernetes.pod.memory.usage.bytes, ."@timestamp"'
My output is:
"2019-03-15T00:24:21.733Z"
"2019-03-15T00:25:10.169Z"
"2019-03-15T00:24:47.908Z"
105889792
"2019-03-15T00:25:04.446Z"
34557952
"2019-03-15T00:25:04.787Z"
How can I drop the extra dates?
For example, to output only:
105889792
"2019-03-15T00:25:04.446Z"
34557952
"2019-03-15T00:25:04.787Z"
You just need to add a pipe after select:
cat testfile.txt | jq 'fromjson | select(.kubernetes.pod.memory.usage.bytes != null) | .kubernetes.pod.memory.usage.bytes, ."@timestamp"'
Here's a DRYer (as in dry) solution:
.["#timestamp"] as $ts | .kubernetes.pod.memory.usage.bytes // empty | ., $ts
Note that this particular use of // assumes that you wish to treat null, false, and a missing key in the same way. If not, you can still use the same idea to stay DRY.
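For instance, a sketch that skips only null or missing values (but would keep false or 0), reusing the same field names:

.["@timestamp"] as $ts
| .kubernetes.pod.memory.usage.bytes
| select(. != null)
| ., $ts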
I am building a function that takes a list made up of lists (e.g. [['a'],['b'],['c']]) and outputs it as a table. I cannot use prettytable because I need a specific output (e.g. | a | b |) with the lines and the spaces exactly alike.
Here is my function:
def show_table(table):
    if table is None:
        table=[]
        new_table=""
        for row in range(table):
            for val in row:
                new_table+= ("| "+val+" ")
            new_table+= "|\n"
    return new_table.strip("\n")
I keep getting the error:
show_table([['a'],['b'],['c']])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 5, in show_table
TypeError: 'list' object cannot be interpreted as an integer
I'm not sure why there is an issue. I've also gotten an output error where it only outputs the first item in the first list and nothing more. Could someone explain how to use the format function to get rid of this error and output what I want correctly?
Fixed error but still failing tests:
FAIL: test_show_table_12 (main.AllTests)
Traceback (most recent call last):
File "testerl7.py", line 116, in test_show_table_12
def test_show_table_12 (self): self.assertEqual (show_table([['10','2','300'],['4000','50','60'],['7','800','90000']]),'| 10 | 2 | 300 |\n| 4000 | 50 | 60 |\n| 7 | 800 | 90000 |\n')
AssertionError: '| 10| 2| 300|\n| 4000| 50| 60|\n| 7| 800| 90000|' != '| 10 | 2 | 300 |\n| 4000 | 50 | 60 |\n| 7 | 800 | 90000 |\n'
- | 10| 2| 300|
+ | 10 | 2 | 300 |
? +++ +++ +++
- | 4000| 50| 60|
+ | 4000 | 50 | 60 |
? + ++ ++++
- | 7| 800| 90000|
+ | 7 | 800 | 90000 |
? ++++ + + +
The problem is here:
for row in range(table):
range takes 1, 2, or 3 integers as arguments. It does not take a list.
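For example:

>>> list(range(3))
[0, 1, 2]
>>> range([1, 2, 3])
Traceback (most recent call last):
  ...
TypeError: 'list' object cannot be interpreted as an integer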
You want to use:
for row in table:
Also, check your indents; it looks like the newline addition should be indented more.
Your traceback tells you that the problem occurs on line 5:
for row in range(table):
… so something on that line is trying, without success, to interpret something else as an integer. If we take a look at the docs for range(), we see this:
The arguments to the range constructor must be integers (either built-in int or any object that implements the __index__ special method).
… but table is not an integer; it's a list. If you want to iterate over a list (or something similar), you don't need a special function – simply
for row in table:
will work just fine.
There's another problem with your function apart from the misuse of range(), which is that you've indented too much of your code. This:
if table is None:
    table=[]
    new_table=""
    for row in range(table):
        for val in row:
            new_table+= ("| "+val+" ")
        new_table+= "|\n"
… will only execute any of the indented code if table is None, whereas what you really want is just to set table=[] if that is the case. Fixing up both those problems gives you this:
def show_table(table):
    if table is None:
        table=[]
    new_table = ""
    for row in table:
        for val in row:
            new_table += ("| " + val + " ")
        new_table += "|\n"
    return new_table.strip("\n")
(I've also changed all your indents to four spaces, and added spaces here and there, to improve the style).
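With those fixes in place, for example:

>>> print(show_table([['a'],['b'],['c']]))
| a |
| b |
| c |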
I have a CSV file with the following structure:
Name | Val1 | Val2 | Val3 | Val4 | Val5
John 1 2
Joe 1 2
David 1 2 10 11
I am able to load this into an RDD fine. I tried to create a schema and then a DataFrame from it, and I get an IndexOutOfBounds error.
The code is something like this ...
val rowRDD = fileRDD.map(p => Row(p(0), p(1), p(2), p(3), p(4), p(5), p(6)))
When I try to perform an action on rowRDD, it gives the error.
Any help is greatly appreciated.
This is not an answer to your question, but it may help to solve your problem.
From the question I see that you are trying to create a DataFrame from a CSV.
Creating a DataFrame from a CSV can easily be done using the spark-csv package.
With spark-csv, the Scala code below can be used to read a CSV:
val df = sqlContext.read.format("com.databricks.spark.csv").option("header", "true").load(csvFilePath)
For your sample data I got the following result
+-----+----+----+----+----+----+
| Name|Val1|Val2|Val3|Val4|Val5|
+-----+----+----+----+----+----+
| John| 1| 2| | | |
| Joe| 1| 2| | | |
|David| 1| 2| | 10| 11|
+-----+----+----+----+----+----+
You can also use inferSchema with the latest version. See this answer.
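For instance (same sqlContext and csvFilePath as above):

val df = sqlContext.read
  .format("com.databricks.spark.csv")
  .option("header", "true")
  .option("inferSchema", "true") // infer column types instead of reading everything as strings
  .load(csvFilePath)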
Empty values are not the issue if the CSV file contains a fixed number of columns and your CSV looks like this (note the empty field separated with its own commas):
David,1,2,10,,11
The problem is that your CSV file contains 6 columns, yet with:
val rowRDD = fileRDD.map(p => Row(p(0), p(1), p(2), p(3), p(4), p(5), p(6)))
you try to read 7 columns. Just change your mapping to:
val rowRDD = fileRDD.map(p => Row(p(0), p(1), p(2), p(3), p(4), p(5)))
And Spark will take care of the rest.
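To finish the job, a sketch of the remaining step (column names taken from the question; all columns kept as strings for simplicity):

import org.apache.spark.sql.types.{StructType, StructField, StringType}

val schema = StructType(
  Seq("Name", "Val1", "Val2", "Val3", "Val4", "Val5")
    .map(name => StructField(name, StringType, nullable = true)))
val df = sqlContext.createDataFrame(rowRDD, schema)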
A possible solution to the problem is replacing the missing values with Double.NaN. Suppose I have a file example.csv with columns in it:
David,1,2,10,,11
You can read the CSV file as a text file as follows:
val fileRDD = sc.textFile("example.csv").map { x =>
  val y = x.split(",")
  // Note: this assumes numeric fields; a non-numeric column such as Name would need separate handling.
  y.map(k => if (k == "") Double.NaN else k.toDouble)
}
And then you can use your code to create a DataFrame from it.
You can do it as follows.
val rowRDD = sc
  .textFile(csvFilePath)
  .map(_.split(delimiter_of_file, -1))
  .map(p =>
    Row(
      p(0),
      p(1),
      p(2),
      p(3),
      p(4),
      p(5),
      p(6)))
Split using the delimiter of your file. When you pass -1 as the limit, split keeps all the empty fields, including trailing ones.
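To see what the -1 limit changes (plain Scala, sample values made up):

"John,1,2,,,".split(",")      // Array(John, 1, 2)             -- trailing empty fields dropped
"John,1,2,,,".split(",", -1)  // Array(John, 1, 2, "", "", "") -- trailing empty fields kept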
I have an issue and I'm looping on it! :| I hope someone can help me.
So I have an input file (.xls) that is simple, but there is a column (let's say it's "ROW1") like this:
ROW1 | ROW2 | ROW3 | ROW_N
765 | 1 | AAAA-MM-DD | ...
null | 1 | AAAA-MM-DD | ...
null | 1 | AAAA-MM-DD | ...
944 | 2 | AAAA-MM-DD | ...
null | 2 | AAAA-MM-DD | ...
088 | 7 | AAAA-MM-DD | ...
555 | 2 | AAAA-MM-DD | ...
null | 2 | AAAA-MM-DD | ...
There is no standard here, as you can see: some lines are null in ROW1, and ROW2 contains equal numbers associated with different ROW1 values (like in lines 5 and 6, then in lines 8 and 9).
My objective is to copy the ROW1 value down into the following rows while ROW1 is null, until it isn't null anymore. Basically, copy from the previous row when the current one is null.
I'm trying to use the "Formula" step, with something like:
=IF(AND(ISBLANK([ROW1]);NOT(ISBLANK([ROW2]));ROW_n=ROW1;IF(AND(NOT(ISBLANK([ROW1]));NOT(ISBLANK([ROW2]));ROW_n=ROW1;ROW_n=""));
But nothing yet..
I've tried "Analytic Query", but nothing either..
I'm just streaming an xls file input..
Thanks very much, any help is very much appreciated!!
Best regards!
Well, I discovered a solution: adding a "User Defined Java Class" step with the code below:
private FieldHelper output_field, card_field;
private RowSet out;
private String previous_card = null;

public boolean processRow(StepMetaInterface smi, StepDataInterface sdi) throws KettleException
{
    if (first)
    {
        first = false;
        out = findTargetRowSet("out");
        output_field = get(Fields.Out, "previous_card");
    } else {
        Object[] r = getRow();
        if (r == null) {
            setOutputDone();
            return false;
        }
        r = createOutputRow(r, data.outputRowMeta.size());
        // Fill the output field with the last non-empty card value seen so far.
        if (previous_card != null) {
            output_field.setValue(r, previous_card);
        }
        if (card_field == null) {
            card_field = get(Fields.In, "Grupo de Cartões");
        }
        // Remember the current value whenever it is not empty.
        String card = card_field.getString(r);
        if (card != null && !card.isEmpty()) {
            previous_card = card;
        }
        // Send the row on to the next step.
        putRowTo(data.outputRowMeta, r, out);
    }
    return true;
}
After this I had to add a few more steps, but this helped very much.
Thank you, mates!!
Finally I got the result. Please follow the steps below.
The image below shows the full transformation screen.
The Data Grid data will be like this. Sorry, on my local machine I don't have Microsoft Excel, which is why I used a Data Grid; instead of the Data Grid you can drag and drop a Microsoft Excel Input step.
Drag and drop one JavaScript step and write the fill-down code there, along the lines of the sketch below.
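The original screenshot of the script isn't reproduced here; a hypothetical sketch of the fill-down logic for a "Modified Java Script Value" step (field and variable names are assumptions) could look like:

// Hypothetical sketch: carry the last non-null ROW1 value forward.
// 'prev' keeps its value between rows: re-declaring a var without
// an initializer does not reset it in the step's script scope.
var prev;
var row1_filled;
if (ROW1 == null || ROW1 == "") {
    row1_filled = prev;   // fill down from the last non-null value
} else {
    row1_filled = ROW1;
    prev = ROW1;          // remember the latest non-null value
}
// Add 'row1_filled' as an output field in the step's Fields tab.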
As the last step of the transformation, drag and drop a Select values step and select the columns (this step is not necessary).
The final result will be like this.
Hope this helps.