I am searching for an HTML node that contains this text:
(~! @ # $ % ^ & * ( ) _ + { } | : " < > ? ` - = [ ] \ ; ' , . / ).
But XPath is obviously having problems because it's not recognizing the above as actual text to search for, but rather as part of the XPath expression, which is why I am getting the error:
is not a valid XPath expression.
So how would I convert the above to a text or string to avoid this issue?
Thanks
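For reference, a common workaround (my own sketch, not from the original thread): an XPath 1.0 string literal cannot contain both quote characters, so you can build the literal with concat(), wrapping the piece that contains the double quote in single quotes and the piece that contains the apostrophe in double quotes:
//*[contains(., concat('(~! @ # $ % ^ & * ( ) _ + { } | : " < > ? ` - = [ ] \ ; ', "' , . / )"))]
In a host language you would normally generate this concat() split programmatically rather than writing it by hand.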
I am trying to append some Grafana dashboard query snippets to existing queries. The select works, and appending a simple "TEST" with += succeeded.
The actual string I need to append contains {}, * and "":
") * on(instance, process_id) group_left(name, display_name, run_as) windows_service_info\{display_name=~"$variable",job="$job"\})
so the jq filter is:
jq '. | (.dashboard.panels[].targets[].expr | select(contains("sum((rate(wmi"))) += ") * on(instance, process_id) group_left(name, display_name, run_as) windows_service_info\{display_name=~"$variable",job="$job"\})"'
I tried the string literal
#text {"text":") * on(instance, process_id) group_left(name, display_name, run_as) windows_service_info\{display_name=~"$variable",job="$job"\})"}'
getting errors like:
jq: error: syntax error, unexpected INVALID_CHARACTER, expecting $end (Unix shell quoting issues?) at <top-level>, line 1:
As json.org says, "any codepoint except " or \ or control characters" can be included within a JSON string without being escaped.
In the question, the given string does contain both " and \, so to convert that string to a valid JSON string, both would need to be escaped.
You need to escape quotation marks inside a quoted string. And the initial .| is redundant:
jq '(.dashboard.panels[].targets[].expr | select(contains("sum((rate(wmi"))) += ") * on(instance, process_id) group_left(name, display_name, run_as) windows_service_info{display_name=~\"$variable\",job=\"$job\"})"'
{ and } must not be escaped, so \{ and \} are invalid escape sequences in a jq string. If you do need a literal \{ in your output, your jq string must contain \\{ and \\}.
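A quick way to sanity-check those quoting rules on a throwaway input (a hypothetical minimal sample, not the real dashboard):
echo '{"expr":"sum((rate(wmi"}' |
  jq '.expr += ") * on(instance) windows_service_info{display_name=~\"$variable\"})"'
{
  "expr": "sum((rate(wmi) * on(instance) windows_service_info{display_name=~\"$variable\"})"
}
Note that the braces pass through unescaped, while the inner double quotes need \" inside the jq string.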
I need to extract data from a single line of JSON data, which is in between two variables (PowerShell).
my Variables:
in front of Data:
DeviceAddresses":[{"Id":
after Data:
,"
I tried this, but there must be some error because of all the special characters I'm using:
$devicepattern = {DeviceAddresses":[{"Id":{.*?},"}
#$deviceid = [regex]::match($changeduserdata, $devicepattern).Groups[1].Value
#$deviceid
As you've found, some character literals can't be used as-is in a regex pattern because they carry special meaning - we call these meta-characters.
In order to match the corresponding character literal in an input string, we need to escape it with \ -
to match a literal (, we use the escape sequence \(,
for a literal }, we use \}, and so on...
Fortunately, you don't need to know or remember which ones are meta-characters or escapable sequences - we can use Regex.Escape() to escape all the special character literals in a given pattern string:
$prefix = [regex]::Escape('DeviceAddresses":[{"Id":')
$capture = '(.*?)'
$suffix = [regex]::Escape(',"')
$devicePattern = "${prefix}${capture}${suffix}"
You also don't need to call [regex]::Match directly; PowerShell will populate the automatic $Matches variable with the match groups whenever a scalar -match succeeds:
if ($changeduserdata -match $devicePattern) {
    $deviceid = $Matches[1]
} else {
    Write-Error 'DeviceID not found'
}
For reference, the following ASCII literals need to be escaped in .NET's regex grammar:
$ ( ) * + . ? [ \ ^ { |
Additionally, # and ' ' (the regular space character) need to be escaped, and a number of other whitespace characters have to be translated to their respective escape sequences, to make patterns safe for use with the IgnorePatternWhitespace option (this is not applicable to your current scenario):
\u0009 => '\t' # Tab
\u000A => '\n' # Line Feed
\u000C => '\f' # Form Feed
\u000D => '\r' # Carriage Return
... all of which Regex.Escape() takes into account for you :)
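As a quick illustration (my own example, not part of the original answer), Escape() only inserts backslashes where the regex grammar requires them; " is not a metacharacter, so it is left alone:
[regex]::Escape('DeviceAddresses":[{"Id":')   # -> DeviceAddresses":\[\{"Id":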
To complement Mathias R. Jessen's helpful answer:
Generally, note that JSON data is much easier to work with - and processed more robustly - if you parse it into objects whose properties you can access - see the bottom section.
As for your regex attempt:
Note: The following also applies to all PowerShell-native regex features, such as the -match, -replace, and -split operators, the switch statement, and the Select-String cmdlet.
Mathias' answer uses [regex]::Escape() to escape the parts of the regex pattern to be used verbatim by the regex engine.
This is unequivocally the best approach if those verbatim parts aren't known in advance - e.g., when provided via a variable or expression, or passed as an argument.
However, in a regex pattern that is specified as a string literal it is often easier to individually \-escape the regex metacharacters, i.e. those characters that would otherwise have special meaning to the regex engine.
The list of characters that need escaping is (it can be inferred from the .NET Regular-Expression Quick Reference):
\ ( ) | . * + ? ^ $ [ {
If you enable the IgnorePatternWhiteSpace option (which you can do inline with (?x) at the start of a pattern), you'll additionally have to \-escape:
#
significant whitespace characters (those you actually want matched) specified verbatim (e.g., ' ', or via string interpolation, "`t"); this does not apply to those specified via escape sequences (e.g., \t or \n).
Therefore, the solution could be simplified to:
# Sample JSON
$changeduserdata = '{"DeviceAddresses":[{"Id": 42,"More": "stuff"}]}'
# Note how [ and { are \-escaped
$deviceId = if ($changeduserdata -match 'DeviceAddresses":\[\{"Id":(.*?),"') {
    $Matches[1]
}
Using ConvertFrom-Json to properly parse JSON into objects is both more robust and more convenient, as it allows property access (dot notation) to extract the value of interest:
# Sample JSON
$changeduserdata = '{"DeviceAddresses":[{"Id": 42,"More": "stuff"}]}'
# Convert to an object ([pscustomobject]) and drill down to the property
# of interest; note that the value of .DeviceAddresses is an *array* ([...]).
$deviceId = (ConvertFrom-Json $changeduserdata).DeviceAddresses[0].Id # -> 42
I have a JSON message from which I need to remove all formatting spaces while keeping the values untouched. This is required before running a hash function over the full payload, so it needs to be precise.
I started with indent=false in the DataWeave writer configuration, but I got a space after each colon, like this:
{"text": "number\": 1 | array\": [ | number\": 1","number": 1,"array": [1,"as",[],{}]}
Is there any elegant solution to remove the remaining spaces before entering the regex world? If not, any regex solution?
I got this solution by following the post suggested by @SalimKhan (thanks for that!). Basically I just wrote a full custom JSON writer in DataWeave.
fun jsonWrite(item) = item match {
    case is Array -> "[" ++ joinBy($ map jsonWrite($), ",") ++ "]"
    case is Object -> "{" ++ joinBy($ pluck ("\"" ++ $$ ++ "\":" ++
        ($ match {
            case is String -> "\"" ++ ($ replace "\"" with "\\\"") ++ "\""
            case is Object -> jsonWrite($)
            case is Array -> "[" ++ joinBy($ map jsonWrite($), ",") ++ "]"
            else -> $
        })), ",") ++ "}"
    case is String -> "\"" ++ ($ replace "\"" with "\\\"") ++ "\""
    else -> $
}
I removed all spaces from the JSON using the DataWeave scripts below.
The first script produces the JSON as a stream with no indentation, but there is still a space after each colon.
%dw 2.0
output application/json indent=false
---
{
    name: "somename",
    city: "sg",
    profession: "tenchdigger"
}
The output of the above script is converted to a string, and all spaces are removed, using the script below:
%dw 2.0
var someSpaceJson = write(payload, "application/json", {"indent":false})
output application/java
---
someSpaceJson replace " " with ""
The end result is a JSON string with no spaces:
"{"name":"somename","city":"sg","profession":"tenchdigger"}"
Why this query:
SELECT
"hello" = " hello",
"hello" = "hello ",
"hello" <> "hello ",
"hello" LIKE "hello ",
"hello" LIKE "hello%"
returns me these results:
"hello" = " hello" -> 0
"hello" = "hello " -> 1
"hello" <> "hello " -> 0
"hello" LIKE "hello " -> 0
"hello" LIKE "hello%" -> 1
In particular, I was expecting "hello" = "hello " to be false and "hello" <> "hello " to be true (the LIKE in this case, behaves exactly as I wanted).
Why does MySQL compare spaces in such an arbitrary and inconsistent way? (For example, returning 0 for "hello" = " hello" and 1 for "hello" = "hello ".)
Is there any way to configure MySQL to ALWAYS work in "strict mode" (in other words, to make it always behave like LIKE for varchar/text comparisons)?
Sadly, I'm using a proprietary framework, and I cannot force it to always use LIKE in queries for text comparisons, or to trim all the inputs.
"hello" = " hello" -- 0,
Because leading spaces are not ignored, so it is not an exact match
"hello" = "hello " -- 1,
Because trailing spaces are ignored for varchar types
"hello" <> "hello " -- 0,
Because trailing spaces are ignored for varchar types
"hello" LIKE "hello " -- 0,
Because LIKE performs matching character by character, and for LIKE trailing spaces are significant
"hello" LIKE "hello%" -- 1,
Because it is a partial pattern matching
If you want strict checking, you can apply the BINARY operator to the values being compared.
mysql> select binary('hello')=binary('hello ') bin_eq, 'hello'='hello ' eq;
+--------+----+
| bin_eq | eq |
+--------+----+
|      0 |  1 |
+--------+----+
Refer to:
MySQL: Comparison Functions and Operators
Trailing spaces are ignored in comparisons if the column is of type CHAR or VARCHAR. See the discussion in the documentation:
All MySQL collations are of type PADSPACE. This means that all CHAR, VARCHAR, and TEXT values in MySQL are compared without regard to any trailing spaces. “Comparison” in this context does not include the LIKE pattern-matching operator, for which trailing spaces are significant.
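To see the PAD SPACE behavior against an actual column, here is a hypothetical table of my own (assuming a PAD SPACE collation such as the pre-8.0 defaults; the newer utf8mb4_0900_* collations in MySQL 8.0 are NO PAD and do compare trailing spaces):
CREATE TABLE t (v VARCHAR(10));
INSERT INTO t VALUES ('hello');
SELECT COUNT(*) FROM t WHERE v = 'hello ';         -- 1: trailing space ignored
SELECT COUNT(*) FROM t WHERE v LIKE 'hello ';      -- 0: significant for LIKE
SELECT COUNT(*) FROM t WHERE BINARY v = 'hello ';  -- 0: byte-by-byte comparison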
I tried recode:
echo '>' | recode ascii..html
&gt;
But it only seems to convert characters like > < and ":
echo 'a' | recode ascii..html
a
I want to convert letters and other characters too, i.e. the desired output of the above command is &#97;.
Is there any simple way to do this without creating some big regular expression?
You can use printf to get the ASCII value of a character by putting ' in front of the variable. This will of course result in &#62; instead of &gt;. You can use the code below to convert $1 to a string of HTML character codes.
str=$1
for (( i=0; i<${#str}; i++ )); do
  c=${str:$i:1}
  # "'$c" makes printf treat c as a character and print its code point
  printf "&#%d;" "'$c"
done
echo ""
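For example, saving the loop as a script that takes the text as its first argument (hypothetical file name):
bash to-entities.sh 'a>'
&#97;&#62;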