Razor, output values from comma separated list

New to Razor syntax. I have a comma-separated list, @item.profilePosition, that appears like this: item1,item2,item3.
I need to output the list in a class attribute like this: class="item1 item2 item3". Would I need to create an array with Split?

@string.Join(" ", item.profilePosition.Split(','))

Assuming that @item.profilePosition is a string, could you just replace the commas with spaces?
class="@item.profilePosition.Replace(",", " ")"
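Both answers produce the same string for simple comma-separated input. A quick sketch in Python (outside Razor, just to illustrate the equivalence; profile_position is a sample value standing in for item.profilePosition):

```python
profile_position = "item1,item2,item3"  # sample value standing in for item.profilePosition

# Split on commas and rejoin with spaces (the string.Join/Split approach)
joined = " ".join(profile_position.split(","))

# Replace commas with spaces directly (the Replace approach)
replaced = profile_position.replace(",", " ")

print(joined)    # item1 item2 item3
print(replaced)  # item1 item2 item3
```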


Parse JSON request with REGEX

I'd like to parse the JSON output from an IEX Cloud stock quote query: https://cloud.iexapis.com/stable/stock/aapl/quote?token=YOUR_TOKEN_HERE
I have tried to use Regex101 to solve the issue:
https://regex101.com/r/y8i01T/1/
Here is the regex expression that I tried: "([^"]+)":"?([^",\s]+)
Here is the example of a IEX Cloud stock quote output for Apple:
{
"symbol":"AAPL",
"companyName":"Apple, Inc.",
"calculationPrice":"close",
"open":204.86,
"openTime":1556285400914,
"close":204.3,
"closeTime":1556308800303,
"high":205,
"low":202.12,
"latestPrice":204.3,
"latestSource":"Close",
"latestTime":"April 26, 2019",
"latestUpdate":1556308800303,
"latestVolume":18604306,
"iexRealtimePrice":204.34,
"iexRealtimeSize":48,
"iexLastUpdated":1556308799763,
"delayedPrice":204.3,
"delayedPriceTime":1556308800303,
"extendedPrice":204.46,
"extendedChange":0.16,
"extendedChangePercent":0.00078,
"extendedPriceTime":1556310657637,
"previousClose":205.28,
"change":-0.98,
"changePercent":-0.00477,
"iexMarketPercent":0.030716437366704246,
"iexVolume":571458,
"avgTotalVolume":27717780,
"iexBidPrice":0,
"iexBidSize":0,
"iexAskPrice":0,
"iexAskSize":0,
"marketCap":963331704000,
"peRatio":16.65,
"week52High":233.47,
"week52Low":142,
"ytdChange":0.29512900000000003
}
I want to save the key-value pairs in the JSON response without quotes around the keys, gathering each value after the colon (:). I need to exclude the quotes around text values and the comma at the end of each line, and include the last key-value pair, which has no trailing comma.
For example, "peRatio":16.65, should give a key of peRatio and a value of 16.65. Another example: "changePercent":-0.00477, should give a key of changePercent and a value of -0.00477. For a text value such as "companyName":"Apple, Inc.",, the key should be companyName and the value Apple, Inc.
Also, the last JSON key-value entry, "ytdChange":0.29512900000000003, does not have a trailing comma, and that needs to be accounted for.
You most likely do not need to parse your data using regex. However, if you wish or have to do so, perhaps to practice regular expressions, you can define a few boundaries in your expression.
This regex divides your input JSON values into three categories: string values, numeric values, and the last no-comma value:
"([^"]+)":("(.+)"|(.+))(,|\n\})
You can then rely on the \n} boundary for the last value, the quote (") boundary for string values, and no boundary for numeric values.
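As the answer says, a regex is rarely the right tool here: most languages ship a JSON parser that handles quoting, embedded commas, and the final no-comma entry for you. A minimal Python sketch, using a trimmed-down version of the response above:

```python
import json

# Trimmed-down sample of the IEX Cloud response shown above
raw = '{"symbol":"AAPL","companyName":"Apple, Inc.","peRatio":16.65,"changePercent":-0.00477,"ytdChange":0.29512900000000003}'

# json.loads strips the quotes from keys and string values, keeps the
# comma inside "Apple, Inc.", and handles the last entry without a comma.
quote = json.loads(raw)

print(quote["peRatio"])       # 16.65
print(quote["companyName"])   # Apple, Inc.
```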

JSON numbers formatted with commas

How can I take some JSON data that contains a number and insert commas into the numbers?
Example: I fetch some JSON data from a URL and can display it; it contains a number, say 100000, but it has no commas to display it more readably as 100,000.
Language used: Angular 6 (TypeScript)
There are many ways to do this; pick your poison:
Intl Number Format
var formatter = new Intl.NumberFormat();
formatter.format(number);
Regex:
function addThousandsSeparator(n) {
  return n.toString().replace(/\B(?=(\d{3})+(?!\d))/g, ",");
}
Numeral.js
numeral(number).format('0,0')
Number.toLocaleString("en-US") should insert commas, the way you want it to.
Number("100000").toLocaleString("en-US")
// "100,000"

PySpark write CSV quote all non-numeric

Is there a way to quote only non-numeric columns in the dataframe when output to CSV file using df.write.csv('path')?
I know you can use the option quoteAll=True to quote all the columns but I only want to quote the string columns.
I am using PySpark 2.2.0.
I only want to quote the string columns.
There is currently no parameter in write.csv that you can use to specify which columns to quote. However, one workaround is to modify your string columns by adding quotes around the values.
First, identify the string columns by iterating over the dtypes:
string_cols = [c for c, t in df.dtypes if t == "string"]
Now you can modify these columns by adding a quote as a prefix and suffix:
from pyspark.sql.functions import col, lit, concat
cols = [
    concat(lit('"'), col(c), lit('"')) if c in string_cols else col(c)
    for c in df.columns
]
df = df.select(*cols)
Finally, write out the CSV:
df.write.csv('path')
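As a point of comparison (not PySpark), Python's standard-library csv module supports exactly the behavior the question asks for via csv.QUOTE_NONNUMERIC, which quotes only non-numeric fields. A small sketch with hypothetical sample rows:

```python
import csv
import io

rows = [["Alice", 30, 5.5], ["Bob", 25, 6.1]]  # hypothetical sample data

buf = io.StringIO()
# QUOTE_NONNUMERIC quotes every field that is not an int or float
writer = csv.writer(buf, quoting=csv.QUOTE_NONNUMERIC)
writer.writerows(rows)

print(buf.getvalue())
# "Alice",30,5.5
# "Bob",25,6.1
```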

Split an expression in RDLC report

Tool: Microsoft Visual Studio 2013
I have an RDLC textbox expression that I want to split on commas, displaying the values on new lines. For example,
Value: Abc, Xyz, STU
The above value needs to be displayed as:
Abc
Xyz
STU
I have tried the below expression:
=IIf((Split(Parameters!rpField.Value, ",").Length = 2),
    Split(Parameters!rpField.Value, ",").GetValue(0) + System.Environment.NewLine + Split(Parameters!rpField.Value, ",").GetValue(1), "")
The result is #Error.
How can I accomplish this in SSRS?
Using the same structure you proposed, this worked for me:
=Split(CStr(Parameters!rpField.Value), ",").GetValue(0)
It looks like you just want to replace the commas with new lines, if you want them all in the same textbox?
If that is the case, you can simply use replace:
=Replace("Abc, Xyz, STU", ", ", vbCrLf)
I have done it using the InStr function, replacing ',' with a new line as below:
=IIF(Parameters!rpField.Value <> "", IIf(InStr(Parameters!rpField.Value, ",") > 0,
    " " + Replace(Parameters!rpField.Value, ",", System.Environment.NewLine) + System.Environment.NewLine,
    " " + Parameters!rpField.Value + System.Environment.NewLine), "")
Try this:
=JOIN(Split(Parameters!rpField.Value,","), System.Environment.NewLine)
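All of these answers are variations on split-then-rejoin; the same idea, sketched in Python for clarity (the sample string comes from the question):

```python
value = "Abc, Xyz, STU"

# Split on the comma, trim the stray spaces, and rejoin with newlines --
# the same idea as JOIN(Split(...), NewLine) in the SSRS expression.
lines = "\n".join(part.strip() for part in value.split(","))

print(lines)
# Abc
# Xyz
# STU
```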

Logstash - Substring from CSV column

I want to import information from a CSV file into Elasticsearch.
My issue is that I don't know how to use an equivalent of substring to select information from a CSV column.
In my case I have a date field (YYYYMMDD) and I want it as (YYYY-MM-DD).
I use filter, mutate, and gsub like:
filter {
  mutate {
    gsub => ["date", "[0123456789][0123456789][0123456789][0123456789][0123456789][0123456789][0123456789][0123456789]", "[0123456789][0123456789][0123456789][0123456789]-[0123456789][0123456789]-[0123456789][0123456789]"]
  }
}
But my result is wrong.
I can identify my string, but I don't know how to extract part of it.
My goal is to have something like:
gsub => ["date", "[0123456789][0123456789][0123456789][0123456789][0123456789][0123456789][0123456789][0123456789]","%{date}(0..3}-%{date}(4..5)-%{date}"(6..7)]
where %{date}(0..3} would select the first four characters of the CSV date column.
You can use the ruby plugin to do the conversion. As you say, you will have a date field, so we can use it directly in Ruby:
filter {
  ruby {
    code => "
      date = Time.strptime(event['date'], '%Y%m%d')
      event['date_new'] = date.strftime('%Y-%m-%d')
    "
  }
}
The date_new field is in the format you want.
First, you can use a regexp range to match a sequence, so rather than [0123456789], you can do [0-9]. If you know there will be 4 numbers, you can do [0-9]{4}.
Second, you want to "capture" parts of your input string and reorder them in the output. For that, you need capture groups:
([0-9]{4})([0-9]{2})([0-9]{2})
where parens define the groups. Then you can reference those on the right side of your gsub:
\1-\2-\3
\1 is the first capture group, etc.
You might also consider getting these three fields when you do the grok{}, and then putting them together again later (perhaps with add_field).
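The capture-group rewrite is easy to verify outside Logstash; in Python, the same pattern and backreferences look like this (the sample date uses the question's YYYYMMDD format):

```python
import re

# Three capture groups for year, month, and day -- the same pattern
# used in the gsub above.
pattern = r"([0-9]{4})([0-9]{2})([0-9]{2})"

date = "20190426"
formatted = re.sub(pattern, r"\1-\2-\3", date)

print(formatted)  # 2019-04-26
```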