Median in Django annotation - MySQL

I'm using MySQL Server v5.8 as the database. How can I find the median price in a query like this?
price_query = Product.objects \
    .filter(price_query) \
    .annotate(dt=Trunc('StartDate', frequency)) \
    .values('dt') \
    .annotate(avg_price=Avg('Price'), std_price=StdDev('Price'),
              count=Count('Price'), max_price=Max('Price'),
              min_price=Min('Price'), median='???') \
    .order_by('dt')
The response looks like this:
{"date":"2021-05-01T00:00:00Z","avg_price":4326.666666666667,"std_price":20.548046676563168,"min_price":4300.0,"max_price":4350.0,"count":3}, {...}, {...}
Any help is highly appreciated.
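MySQL has no built-in MEDIAN aggregate, so one common workaround (a sketch, not part of the original question) is to fetch the per-bucket prices and compute the median in Python, e.g. on the result of `values_list('dt', 'Price')` after the `Trunc` annotation:

```python
from collections import defaultdict
from statistics import median

def medians_by_bucket(rows):
    """rows: iterable of (bucket, price) pairs, e.g. from
    Product.objects.annotate(dt=Trunc(...)).values_list('dt', 'Price')."""
    buckets = defaultdict(list)
    for dt, price in rows:
        buckets[dt].append(price)
    return {dt: median(prices) for dt, prices in buckets.items()}

# Plain tuples standing in for queryset rows:
rows = [("2021-05", 4300.0), ("2021-05", 4330.0), ("2021-05", 4350.0)]
# medians_by_bucket(rows) -> {"2021-05": 4330.0}
```

This trades a second pass over the data for not needing raw SQL; for large tables a raw-SQL window-function approach would be preferable on MySQL 8.0+.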

Create Azure EventHub via CLI with Capture

Scenario: I am putting together a repeatable script that creates, among other things, an Azure EventHub. My code looks like:
az eventhubs eventhub create \
--name [name] \
--namespace-name [namespace] \
--resource-group [group] \
--status Active \
--enable-capture true \
--archive-name-format "{Namespace}/{EventHub}/{Year}/{Month}/{Day}/{Hour}/{Minute}/{Second}/{PartitionId}" \
--storage-account [account] \
--blob-container [blob] \
--capture-interval 300 \
--partition-count 10 \
--skip-empty-archives true
If I run the code as written, I get: "Required property 'name' not found in JSON. Path 'properties.captureDescription.destination', line 1, position 527."
However, if I remove the --enable-capture true parameter, the EventHub is created, albeit with Capture not enabled. If I enable Capture, none of the capture-related parameters other than the interval are set.
Is there a typo in there that I'm not seeing?
Try providing --destination-name; the error points at a missing name property under properties.captureDescription.destination. For reference, the usage is:
az eventhubs eventhub create --name
--namespace-name
--resource-group
[--archive-name-format]
[--blob-container]
[--capture-interval]
[--capture-size-limit]
[--destination-name]
[--enable-capture {false, true}]
[--message-retention]
[--partition-count]
[--skip-empty-archives {false, true}]
[--status {Active, Disabled, SendDisabled}]
[--storage-account]
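A corrected invocation might look like this (a sketch; `EventHubArchive.AzureBlockBlob` is the usual destination name for blob-storage capture, assumed here, and the bracketed values are placeholders from the question):

```shell
az eventhubs eventhub create \
  --name [name] \
  --namespace-name [namespace] \
  --resource-group [group] \
  --status Active \
  --enable-capture true \
  --destination-name EventHubArchive.AzureBlockBlob \
  --archive-name-format "{Namespace}/{EventHub}/{Year}/{Month}/{Day}/{Hour}/{Minute}/{Second}/{PartitionId}" \
  --storage-account [account] \
  --blob-container [blob] \
  --capture-interval 300 \
  --partition-count 10 \
  --skip-empty-archives true
```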

JMESPath filter with >1 match ANDING

I saw the ORing post; this one covers ANDing, which I struggled with.
Given this while loop:
while read -r resourceID resourceName; do
    pMsg "Processing: $resourceID with $resourceName"
    aws emr describe-cluster --cluster-id="$resourceID" --output table > "${resourceName}.md"
done <<< "$(aws emr list-clusters --active --query='Clusters[].Id' \
    --output text | sortExpression)"
I need to feed my loop with the ID AND Name of the clusters. One is easy; two is eluding me. Any help is appreciated.
If your goal is to end up with an output from list-clusters looking like this:
1 ABCD
2 EFGH
in order to feed it to describe-cluster, then you should create a multiselect list.
Something like:
Clusters[].[Id, Name]
This is actually described in the user guide about text output format, where they show that:
'Reservations[*].Instances[*].[Placement.AvailabilityZone, State.Name,
InstanceId]' --output text
Gives
us-west-2a running i-4b41a37c
us-west-2a stopped i-a071c394
us-west-2b stopped i-97a217a0
us-west-2a running i-3045b007
us-west-2a running i-6fc67758
Source: https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-output-format.html#text-output
So you should end up with
while read -r resourceID resourceName; do
    pMsg "Processing: $resourceID with $resourceName"
    aws emr describe-cluster \
        --cluster-id="$resourceID" \
        --output table > "${resourceName}.md"
done <<< "$(aws emr list-clusters \
    --active \
    --query='Clusters[].[Id, Name]' \
    --output text | sortExpression \
)"
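To see why the multiselect list yields two tab-separated columns, here is the equivalent selection written out in plain Python (a sketch with made-up cluster data; `Clusters[].[Id, Name]` projects each cluster object into a two-element list, and `--output text` prints each inner list tab-separated, one per line):

```python
# Made-up response shaped like `aws emr list-clusters` output
clusters = {"Clusters": [{"Id": "1", "Name": "ABCD"},
                         {"Id": "2", "Name": "EFGH"}]}

# The multiselect list Clusters[].[Id, Name], in Python terms:
pairs = [[c["Id"], c["Name"]] for c in clusters["Clusters"]]

# --output text renders each inner list tab-separated, one row per line,
# which is exactly what `read -r resourceID resourceName` splits apart:
text = "\n".join("\t".join(p) for p in pairs)
```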

Ignore errors while loading to Mysql from Pyspark

In Redshift we can use the maxerror option:
# assuming the spark-redshift connector as the data source
df.write.format("com.databricks.spark.redshift") \
    .option("url", jdbc_url) \
    .option("dbtable", tbl) \
    .option("tempdir", tmp_folder_rs) \
    .option("aws_iam_role", rs_iam) \
    .option("extracopyoptions", "maxerror as 100000 blanksasnull") \
    .mode("append") \
    .save()
which gives it a tolerance of some records while loading.
Do we have similar options for MYSQL. My write to mysql is as below
job_status_event_req_cols.write \
    .format("jdbc") \
    .option("url", "jdbc:mysql://test.us-west-1.vpce.amazonaws.com/test") \
    .option("driver", "com.mysql.jdbc.Driver") \
    .option("dbtable", "employee") \
    .option("user", "writer") \
    .option("password", "test") \
    .mode('append') \
    .save()
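As far as I know, the Spark JDBC writer has no maxerror-style tolerance option for MySQL; a common workaround is to validate rows in Spark and write only the clean subset. A minimal sketch of that idea in plain Python (the required-column check and the row data are hypothetical; in Spark you would express the same predicate with `DataFrame.filter()` before the JDBC write):

```python
def split_rows(rows, required):
    """Split dict-shaped rows into (valid, rejected) by presence of
    non-null values for every required column."""
    valid, rejected = [], []
    for row in rows:
        ok = all(row.get(col) is not None for col in required)
        (valid if ok else rejected).append(row)
    return valid, rejected

rows = [{"id": 1, "name": "a"}, {"id": None, "name": "b"}]
good, bad = split_rows(rows, required=["id", "name"])
# good -> [{"id": 1, "name": "a"}]; bad -> [{"id": None, "name": "b"}]
```

The rejected rows can then be written to a side table or log, which roughly reproduces what Redshift's maxerror tolerance gives you.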

arangoimp of graph from CSV file

I have a network scan in a TSV file that contains data in a form like the following sample
source IP target IP source port target port
192.168.84.3 192.189.42.52 5868 1214
192.168.42.52 192.189.42.19 1214 5968
192.168.4.3 192.189.42.52 60680 22
....
192.189.42.52 192.168.4.3 22 61969
Is there an easy way to import this using arangoimp into the (pre-created) edge collection networkdata?
You could use the TSV importer directly, but it used to fail converting the IPs (fixed in ArangoDB 3.0), so you need a bit of conversion logic to get valid CSV. We will use the edge attribute conversion options to turn the first two columns into valid _from and _to attributes during the import.
The header line shouldn't contain column names with blanks in them, and the separators should really be tabs (or a constant number of columns). We also need _from and _to fields in the header line.
In order to make it work, you would pipe the above through sed to get valid CSV and proper column names like this:
cat /tmp/test.tsv | \
sed -e 's;source IP;_from;' \
    -e 's;target IP;_to;' \
    -e 's; port;Port;g' \
    -e 's;[[:space:]][[:space:]]*;",";g' \
    -e 's;^;";' \
    -e 's;$;";' | \
arangoimp --file - \
    --type csv \
    --from-collection-prefix sourceHosts \
    --to-collection-prefix targetHosts \
    --collection "ipEdges" \
    --create-collection true \
    --create-collection-type edge
Sed with these regular expressions will create an intermediate representation looking like this:
"_from","_to","sourcePort","targetPort"
"192.168.84.3","192.189.42.52","5868","1214"
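You can check the sed stage alone on the two sample lines (a sketch; it assumes a POSIX sed, tab-separated input, and uses a whitespace-run pattern so it works on both tabs and spaces):

```shell
printf 'source IP\ttarget IP\tsource port\ttarget port\n192.168.84.3\t192.189.42.52\t5868\t1214\n' | \
sed -e 's;source IP;_from;' \
    -e 's;target IP;_to;' \
    -e 's; port;Port;g' \
    -e 's;[[:space:]][[:space:]]*;",";g' \
    -e 's;^;";' \
    -e 's;$;";'
```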
The generated edges will look like this:
{
"_key" : "21056",
"_id" : "ipEdges/21056",
"_from" : "sourceHosts/192.168.84.3",
"_to" : "targetHosts/192.189.42.52",
"_rev" : "21056",
"sourcePort" : "5868",
"targetPort" : "1214"
}

MooTools build hash in 1.2.4.4

We are trying to upgrade our MooTools installation from 1.2.4 to 1.2.6. The original developer included a "more" file with optional plugins, but because it is compressed we can't tell what was included in that file. I'd rather not hunt and pick through the code.
I noticed the compressed more file has a build hash in the header (6f6057dc645fdb7547689183b2311063bd653ddf). The 1.4 builder located here will let you just append that hash to the url and create a build. It doesn't seem the 1.2 version supports that functionality.
Is there an easy way to determine from the hash or the compressed file what plugins are included in this 1.2 build?
AFAIK there's no way to get the list of plugins directly from the build hash. But if you have access to a UNIX shell, save the following shell script as find_plugins.sh:
#!/bin/sh
for PLUGIN in \
More Lang Log Class.Refactor Class.Binds Class.Occlude Chain.Wait \
Array.Extras Date Date.Extras Hash.Extras String.Extras \
String.QueryString URI URI.Relative Element.Forms Elements.From \
Element.Delegation Element.Measure Element.Pin Element.Position \
Element.Shortcuts Form.Request Form.Request.Append Form.Validator \
Form.Validator.Inline Form.Validator.Extras OverText Fx.Elements \
Fx.Accordion Fx.Move Fx.Reveal Fx.Scroll Fx.Slide Fx.SmoothScroll \
Fx.Sort Drag Drag.Move Slider Sortables Request.JSONP Request.Queue \
Request.Periodical Assets Color Group Hash.Cookie IframeShim HtmlTable \
HtmlTable.Zebra HtmlTable.Sort HtmlTable.Select Keyboard Keyboard.Extras \
Mask Scroller Tips Spinner Date.English.US Form.Validator.English \
Date.Catalan Date.Czech Date.Danish Date.Dutch Date.English.GB \
Date.Estonian Date.German Date.German.CH Date.French Date.Italian \
Date.Norwegian Date.Polish Date.Portuguese.BR Date.Russian Date.Spanish \
Date.Swedish Date.Ukrainian Form.Validator.Arabic Form.Validator.Catalan \
Form.Validator.Czech Form.Validator.Chinese Form.Validator.Dutch \
Form.Validator.Estonian Form.Validator.German Form.Validator.German.CH \
Form.Validator.French Form.Validator.Italian Form.Validator.Norwegian \
Form.Validator.Polish Form.Validator.Portuguese \
Form.Validator.Portuguese.BR Form.Validator.Russian \
Form.Validator.Spanish Form.Validator.Swedish Form.Validator.Ukrainian
do
grep -q -F "$PLUGIN" "$1" && echo "$PLUGIN"
done
Then run it, passing the filename of your MooTools More file as the first argument:
sh find_plugins.sh mootools-more.js
It will print out a list of all plugin names found in the JS code. That should get you started.