Create Azure EventHub via CLI with Capture - azure-cli

Scenario: I am putting together a repeatable script that creates, among other things, an Azure EventHub. My code looks like:
az eventhubs eventhub create \
--name [name] \
--namespace-name [namespace] \
--resource-group [group] \
--status Active \
--enable-capture true \
--archive-name-format "{Namespace}/{EventHub}/{Year}/{Month}/{Day}/{Hour}/{Minute}/{Second}/{PartitionId}" \
--storage-account [account] \
--blob-container [blob] \
--capture-interval 300 \
--partition-count 10 \
--skip-empty-archives true
If I run the code as written, I get a "Required property 'name' not found in JSON. Path 'properties.captureDescription.destination', line 1, position 527." error.
However, if I remove the --enable-capture true parameter, the EventHub is created, albeit with Capture not enabled. If I enable Capture, none of the capture-related parameters other than the interval are set.
Is there a typo in there that I'm not seeing?

Try providing the --destination-name.
az eventhubs eventhub create --name
--namespace-name
--resource-group
[--archive-name-format]
[--blob-container]
[--capture-interval]
[--capture-size-limit]
[--destination-name]
[--enable-capture {false, true}]
[--message-retention]
[--partition-count]
[--skip-empty-archives {false, true}]
[--status {Active, Disabled, SendDisabled}]
[--storage-account]
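For completeness, the create call with capture fully wired up would look something like this (a sketch: the bracketed placeholders are yours, and EventHubArchive.AzureBlockBlob is, to my knowledge, the destination name the service expects when capturing into Azure Blob Storage):

```shell
az eventhubs eventhub create \
  --name [name] \
  --namespace-name [namespace] \
  --resource-group [group] \
  --status Active \
  --partition-count 10 \
  --enable-capture true \
  --capture-interval 300 \
  --skip-empty-archives true \
  --destination-name EventHubArchive.AzureBlockBlob \
  --storage-account [account] \
  --blob-container [blob] \
  --archive-name-format "{Namespace}/{EventHub}/{Year}/{Month}/{Day}/{Hour}/{Minute}/{Second}/{PartitionId}"
```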

Related

JMESPath filter with >1 match ANDING

I saw the post about ORing; this one should cover ANDing, which I struggled with.
Given this while loop:
while read -r resourceID resourceName; do
pMsg "Processing: $resourceID with $resourceName"
aws emr describe-cluster --cluster-id="$resourceID" --output table > "${resourceName}.md"
done <<< "$(aws emr list-clusters --active --query='Clusters[].Id' \
--output text | sortExpression)"
I need to feed my loop with the ID AND Name of the clusters. One is easy; two is eluding me. Any help is appreciated.
If your goal is to end up with an output looking like this from list-clusters:
1 ABCD
2 EFGH
In order to feed it to describe-cluster, you should create a multiselect list.
Something like:
Clusters[].[Id, Name]
This is actually described in the user guide about text output format, where they show that:
'Reservations[*].Instances[*].[Placement.AvailabilityZone, State.Name,
InstanceId]' --output text
Gives
us-west-2a running i-4b41a37c
us-west-2a stopped i-a071c394
us-west-2b stopped i-97a217a0
us-west-2a running i-3045b007
us-west-2a running i-6fc67758
Source: https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-output-format.html#text-output
So you should end up with
while read -r resourceID resourceName; do
pMsg "Processing: $resourceID with $resourceName"
aws emr describe-cluster \
--cluster-id="$resourceID" \
--output table > "${resourceName}.md"
done <<< "$(aws emr list-clusters \
--active \
--query='Clusters[].[Id, Name]' \
--output text | sortExpression \
)"
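To see the plumbing without touching AWS, here is a minimal sketch of how read -r splits the two tab-separated columns that --output text produces (the cluster IDs and names below are made up):

```shell
# Stand-in for `aws emr list-clusters ... --output text`:
# text output is tab-separated, one row per cluster
fake_clusters=$(printf 'j-1AAAA\talpha\nj-2BBBB\tbeta\n')

# read -r splits each line on whitespace into the two variables
while read -r resourceID resourceName; do
    echo "Processing: $resourceID with $resourceName"
done <<< "$fake_clusters"
```

Since read splits on the default IFS (spaces and tabs), each row lands in resourceID and resourceName exactly as the loop expects.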

How to insert variable in fswatch regex?

I'm trying to use a variable to identify mxf or mov file extensions. The following works where I explicitly name the file extensions with a regular expression.
${FSWATCH_PATH} -0 \
-e ".*" --include ".*\.[ mxf|mov ]" \
--event Updated --event Renamed --event MovedTo -l $LATENCY \
$LOCAL_WATCHFOLDER_PATH \
| while read -d "" event
do
<code here>
done
How can I use a variable for the file extensions, where the variable name is FileTriggerExtensions? The code below doesn't work:
FileTriggerExtensions=mov|mxf
${FSWATCH_PATH} -0 \
-e ".*" --include ".*\.[ $FileTriggerExtensions ]" \
--event Updated --event Renamed --event MovedTo -l $LATENCY \
$LOCAL_WATCHFOLDER_PATH \
| while read -d "" event
do
done
I guess you are using Bash or a similar shell? In an unquoted assignment the pipe is parsed as a pipeline, so the shell tries to run mxf as a command:
FileTriggerExtensions=mov|mxf
-bash: mxf: command not found
Use quotes or escape the pipe symbol.
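A minimal sketch of the fix, assuming fswatch's --include takes a POSIX extended regex. Note that [ mov|mxf ] is a bracket expression matching single characters; (mov|mxf) is what you want for alternation:

```shell
# Quote the assignment so the shell no longer parses | as a pipeline
FileTriggerExtensions="mov|mxf"

# Group the alternation with (...) instead of a character class
IncludePattern=".*\.(${FileTriggerExtensions})$"
echo "$IncludePattern"   # .*\.(mov|mxf)$
```

Then call ${FSWATCH_PATH} with --include "$IncludePattern" in place of the hard-coded pattern.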

arangoimp of graph from CSV file

I have a network scan in a TSV file that contains data in a form like the following sample
source IP target IP source port target port
192.168.84.3 192.189.42.52 5868 1214
192.168.42.52 192.189.42.19 1214 5968
192.168.4.3 192.189.42.52 60680 22
....
192.189.42.52 192.168.4.3 22 61969
Is there an easy way to import this using arangoimp into the (pre-created) edge collection networkdata?
You could almost use the TSV importer directly, but it used to fail converting the IPs (fixed in ArangoDB 3.0), so you need a bit of conversion logic to get valid CSV. We will use the edge attribute conversion options (--from-collection-prefix / --to-collection-prefix) to turn the first two columns into valid _from and _to attributes during the import.
You shouldn't use column headers containing blanks, and the separator should really be tabs or a constant number of columns. We also need _from and _to fields in the header line.
In order to make it work, you would pipe the above through sed to get valid CSV and proper column names like this:
cat /tmp/test.tsv | \
sed -e 's;source IP;_from;' \
-e 's;target IP;_to;' \
-e 's; port;Port;g' \
-e 's;[[:space:]][[:space:]]*;",";g' \
-e 's;^;";' \
-e 's;$;";' | \
arangoimp --file - \
--type csv \
--from-collection-prefix sourceHosts \
--to-collection-prefix targetHosts \
--collection "ipEdges" \
--create-collection true \
--create-collection-type edge
Sed with these regular expressions will create an intermediate representation looking like this:
"_from","_to","sourcePort","targetPort"
"192.168.84.3","192.189.42.52","5868","1214"
The generated edges will look like this:
{
"_key" : "21056",
"_id" : "ipEdges/21056",
"_from" : "sourceHosts/192.168.84.3",
"_to" : "targetHosts/192.189.42.52",
"_rev" : "21056",
"sourcePort" : "5868",
"targetPort" : "1214"
}
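You can check the sed rewrite in isolation before piping it into arangoimp (assuming the sample rows are tab-separated, as a TSV should be):

```shell
# Rewrite the TSV header and rows into quoted CSV with _from/_to columns;
# the [[:space:]][[:space:]]* expression collapses each tab (or run of
# spaces) separating the columns into "," before the line is wrapped in quotes
printf 'source IP\ttarget IP\tsource port\ttarget port\n192.168.84.3\t192.189.42.52\t5868\t1214\n' | \
sed -e 's;source IP;_from;' \
    -e 's;target IP;_to;' \
    -e 's; port;Port;g' \
    -e 's;[[:space:]][[:space:]]*;",";g' \
    -e 's;^;";' \
    -e 's;$;";'
```

This prints the two CSV lines shown in the intermediate representation above.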

MooTools build hash in 1.2.4.4

We are trying to upgrade our MooTools installation from 1.2.4 to 1.2.6. The original developer included a "more" file with optional plugins, but because it is compressed we can't tell what was included in that file. I'd rather not hunt and pick through the code.
I noticed the compressed more file has a build hash in the header (6f6057dc645fdb7547689183b2311063bd653ddf). The 1.4 builder located here will let you just append that hash to the url and create a build. It doesn't seem the 1.2 version supports that functionality.
Is there an easy way to determine from the hash or the compressed file what plugins are included in this 1.2 build?
AFAIK there's no way to get the list of plugins directly from the build hash. But if you have access to a UNIX shell, save the following shell script as find_plugins.sh:
#!/bin/sh
for PLUGIN in \
More Lang Log Class.Refactor Class.Binds Class.Occlude Chain.Wait \
Array.Extras Date Date.Extras Hash.Extras String.Extras \
String.QueryString URI URI.Relative Element.Forms Elements.From \
Element.Delegation Element.Measure Element.Pin Element.Position \
Element.Shortcuts Form.Request Form.Request.Append Form.Validator \
Form.Validator.Inline Form.Validator.Extras OverText Fx.Elements \
Fx.Accordion Fx.Move Fx.Reveal Fx.Scroll Fx.Slide Fx.SmoothScroll \
Fx.Sort Drag Drag.Move Slider Sortables Request.JSONP Request.Queue \
Request.Periodical Assets Color Group Hash.Cookie IframeShim HtmlTable \
HtmlTable.Zebra HtmlTable.Sort HtmlTable.Select Keyboard Keyboard.Extras \
Mask Scroller Tips Spinner Date.English.US Form.Validator.English \
Date.Catalan Date.Czech Date.Danish Date.Dutch Date.English.GB \
Date.Estonian Date.German Date.German.CH Date.French Date.Italian \
Date.Norwegian Date.Polish Date.Portuguese.BR Date.Russian Date.Spanish \
Date.Swedish Date.Ukrainian Form.Validator.Arabic Form.Validator.Catalan \
Form.Validator.Czech Form.Validator.Chinese Form.Validator.Dutch \
Form.Validator.Estonian Form.Validator.German Form.Validator.German.CH \
Form.Validator.French Form.Validator.Italian Form.Validator.Norwegian \
Form.Validator.Polish Form.Validator.Portuguese \
Form.Validator.Portuguese.BR Form.Validator.Russian \
Form.Validator.Spanish Form.Validator.Swedish Form.Validator.Ukrainian
do
grep -q -F "$PLUGIN" "$1" && echo "$PLUGIN"
done
Then run it like this, passing the filename of your MooTools More file as the first argument:
sh find_plugins.sh mootools-more.js
It will print out a list of all plugin names found in the JS code. That should get you started.
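As a quick smoke test, the probe works on any file containing the plugin names as literal strings (the file below is a made-up stand-in for the compressed build):

```shell
# Made-up "compressed" file containing two plugin names (hypothetical content)
printf 'var a=1;/*Fx.Slide*/var b=2;/*Drag.Move*/' > fake-more.js

# The same grep probe the script runs, one plugin name at a time
grep -q -F 'Fx.Slide'  fake-more.js && echo 'Fx.Slide'
grep -q -F 'Drag.Move' fake-more.js && echo 'Drag.Move'
grep -q -F 'Sortables' fake-more.js || echo 'Sortables: not found'
```

One caveat: grep -F matches substrings, so a short name like Date will also be reported whenever Date.Extras or Date.German is present; treat the output as a starting point rather than an exact manifest.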

DBIx::Class base result class

I am trying to create a model for Catalyst by using DBIx::Class::Schema::Loader. I want the result classes to have a base class I can add methods to. So MyTable.pm inherits from Base.pm, which inherits from DBIx::Class::Core (the default).
Somehow I cannot figure out how to do this. My create script is below; can anyone tell me what I am doing wrong? The script creates my model OK, but all result classes just inherit directly from DBIx::Class::Core without my Base class in between.
#!/usr/bin/perl
use DBIx::Class::Schema::Loader qw/ make_schema_at /;
#specifically for the entities many-2-many relation
$ENV{DBIC_OVERWRITE_HELPER_METHODS_OK} = 1;
make_schema_at(
'MyApp::Schema',
{
dump_directory => '/tmp',
debug => 1,
overwrite_modifications => 1,
components => ['EncodedColumn'], #encoded password column
use_namespaces => 1,
default_resultset_class => 'Base'
},
[ 'DBI:mysql:database=mydb;host=localhost;port=3306','rob', '******' ],
);
Looks like you just want to add result_base_class (and probably drop the default_resultset_class):
env DBIC_OVERWRITE_HELPER_METHODS_OK=1 \
dbicdump \
-o result_base_class="FullNameOf::Base" \
-o debug=1 \
-o dump_directory=./tmp \
-o components='["EncodedColumn"]' \
-o use_namespaces=1 \
-o overwrite_modifications=1 \
"DBI:mysql:database=mydb;host=localhost;port=3306" \
rob "******"
Update, relevant doc: DBIx::Class::Schema::Loader::Base#result_base_class.
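If you would rather keep your make_schema_at script than switch to dbicdump, the same option goes into the loader options hashref (a sketch; MyApp::Schema::Base stands in for whatever your base class is actually called):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBIx::Class::Schema::Loader qw/ make_schema_at /;

# specifically for the entities many-2-many relation
$ENV{DBIC_OVERWRITE_HELPER_METHODS_OK} = 1;

make_schema_at(
    'MyApp::Schema',
    {
        dump_directory          => '/tmp',
        debug                   => 1,
        overwrite_modifications => 1,
        components              => ['EncodedColumn'],    # encoded password column
        use_namespaces          => 1,
        # result classes will inherit from this instead of DBIx::Class::Core
        result_base_class       => 'MyApp::Schema::Base',
    },
    [ 'DBI:mysql:database=mydb;host=localhost;port=3306', 'rob', '******' ],
);
```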