ffmpeg presets are:
ultrafast, superfast, veryfast, faster, fast, medium, slow, slower, veryslow and placebo
How should my JSON request look if I want the encoding speed set to slow?
My json so far:
{
"query": {
"userid": "79943",
"userkey": "XXX",
"action": "addMedia",
"notify": "aw3somevideo@gmail.com",
"format": [{
"output": "mpeg_dash",
"sizes": "426x240,640x360",
"bitrates": "400k,800k"
}, {
"output": "ipad_stream",
"sizes": "426x240,640x360",
"bitrates": "400k,800k"
}]
}
}
After some research, and with help from @rogerdpack and encoding.com support, I found an answer.
It can be set in the video codec parameters:
Slow:
--me umh --subme 8 --ref 5 --b-adapt 2 --direct auto
The JSON structure will look like this:
"video_codec_parameters": {
"coder": "1",
"flags": "+loop",
"flags2": "+bpyramid+wpred+mixed_refs+dct8x8-fastpskip",
"cmp": "+chroma",
"partitions": "+parti8x8+parti4x4+partp8x8+partb8x8",
"me_method": "umh",
"subq": "7",
"me_range": "16",
"bf": "16",
"keyint_min": "25",
"sc_threshold": "40",
"i_qfactor": "0.71",
"b_strategy": "1",
"qcomp": "0.6",
"qmin": "10",
"qmax": "51",
"qdiff": "4",
"directpred": "1",
"level": "30",
"refs": "4",
"psy": "0"
},
I have a longtext field with data stored in JSON format:
[
{
"name": "Internal_use",
"id": "InternalUse_mv_rec",
"fields": [
{
"d_on": "1",
"like": "1",
"length": "150",
"show": "1",
"pattern": "",
"caption": "mv_previous_cr",
"type": "1",
"script": "",
"preview_on": "1",
"search": "1",
"name": "mv_previous_cr",
"options": "",
"btn_caption": "",
"open": "1"
},
{
"d_on": "1",
"like": "1",
"length": "100",
"show": "1",
"pattern": "",
"caption": "mv_previous_pn",
"type": "2",
"script": "",
"preview_on": "1",
"search": "1",
"name": "mv_previous_pn",
"options": "",
"btn_caption": "",
"open": "1"
}
],
"cata_key": "cata_.1568937511545431466",
"open": "1"
}
]
I would like to update "length" to 200 for the field with "name": "mv_previous_cr" using a MySQL query. How can I do that?
mysql version: 5.7.28
Please note that this data is stored by a third-party app and I cannot modify the data type.
Thanks!
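No answer to this question appears in the thread. Purely as a sketch of the transformation itself (in Python, since the actual table and column names are unknown), the update amounts to locating the object in the fields array whose name is mv_previous_cr and rewriting its length. In MySQL 5.7 the same path logic is available through the JSON_SEARCH and JSON_SET functions on the column value; treat that as a direction to explore rather than a tested query.

```python
import json

# Hypothetical, abbreviated sample of the longtext column value from the question.
raw = '''[{"name": "Internal_use",
           "fields": [{"name": "mv_previous_cr", "length": "150"},
                      {"name": "mv_previous_pn", "length": "100"}]}]'''

doc = json.loads(raw)
for block in doc:
    for field in block.get("fields", []):
        if field.get("name") == "mv_previous_cr":
            field["length"] = "200"  # stored as a string in the source data

# This string would be written back to the longtext column.
updated = json.dumps(doc)
print(updated)
```

Doing the rewrite client-side like this also sidesteps the tricky part of the pure-SQL route, which is computing the JSON path to the matching array element.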
I'm new to the community and I'm not a dev, but I have a task I need to find a solution for. I hope I can get your ideas.
I have a set of JSON files. I want to use jq, or another command-line tool, to unify the files into one single file.
For example:
File 1, has the following format:
{
"Interactions": [
{
"ID": "ispring://presentations/F7385CB7-DFDC-4D05-90FA-B927DB3D170D/quizzes/",
"Type": "2",
"TimestampUtc": "8/27/2020 12:09:54 PM",
"Timestamp": "8/27/2020 12:09:54 PM",
"Weighting": "",
"Result": "1",
"Latency": "1000",
"Description": "What is the purpose of Summarizing next steps?\n\nSelect the correct box or boxes",
"LearnerResponse": "0_correct_answer[,]1_Rectangle_2",
"ScormActivityId": "12392705",
"InteractionIndex": "3",
"AULMRID": "38093846"
},
]
}
File 2:
{
"Interactions": [
{
"ID": "ispring://presentations/CAA34147-7B48-40C6-84FD-5CE8077DB2BF/quizzes/",
"Type": "2",
"TimestampUtc": "12/8/2020 6:19:12 PM",
"Timestamp": "12/8/2020 6:19:12 PM",
"Weighting": "",
"Result": "1",
"Latency": "1300",
"Description": "'Can't do' language tends to relay this impression...\n\nSelect one.",
"LearnerResponse": "4_All_of_the_above",
"ScormActivityId": "13334358",
"InteractionIndex": "3",
"AULMRID": "40715598"
},
]
}
And this is my expected result in a third file:
{
"Interactions": [
{
"ID": "ispring://presentations/F7385CB7-DFDC-4D05-90FA-B927DB3D170D/quizzes/",
"Type": "2",
"TimestampUtc": "8/27/2020 12:09:54 PM",
"Timestamp": "8/27/2020 12:09:54 PM",
"Weighting": "",
"Result": "1",
"Latency": "1000",
"Description": "What is the purpose of Summarizing next steps?\n\nSelect the correct box or boxes",
"LearnerResponse": "0_correct_answer[,]1_Rectangle_2",
"ScormActivityId": "12392705",
"InteractionIndex": "3",
"AULMRID": "38093846"
},
{
"ID": "ispring://presentations/CAA34147-7B48-40C6-84FD-5CE8077DB2BF/quizzes/",
"Type": "2",
"TimestampUtc": "12/8/2020 6:19:12 PM",
"Timestamp": "12/8/2020 6:19:12 PM",
"Weighting": "",
"Result": "1",
"Latency": "1300",
"Description": "'Can't do' language tends to relay this impression...\n\nSelect one.",
"LearnerResponse": "4_All_of_the_above",
"ScormActivityId": "13334358",
"InteractionIndex": "3",
"AULMRID": "40715598"
},
]
}
Any ideas on how to unify them?
Thank you!
RG
Try something like the following:
jq -n '{ Interactions: [ inputs.Interactions ] | add }' file1.json file2.json
This assumes that you have made both input files valid JSON by stripping the trailing comma after the last object in each Interactions array.
For input file1.json:
{
"Interactions": [
{
"ID": "file1",
"Type": "1",
"Timestamp": "8/27/2020 11:11:11 PM"
}
]
}
and input file2.json:
{
"Interactions": [
{
"ID": "file2",
"Type": "2",
"Timestamp": "8/27/2020 22:22:22 PM"
}
]
}
this results in:
{
"Interactions": [
{
"ID": "file1",
"Type": "1",
"Timestamp": "8/27/2020 11:11:11 PM"
},
{
"ID": "file2",
"Type": "2",
"Timestamp": "8/27/2020 22:22:22 PM"
}
]
}
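For environments without jq, the same merge can be sketched in Python. The file contents are shown inline here for brevity; in practice each would come from json.load on the input file:

```python
import json

# Stand-ins for json.load(open("file1.json")) and json.load(open("file2.json")).
file1 = {"Interactions": [{"ID": "file1", "Type": "1"}]}
file2 = {"Interactions": [{"ID": "file2", "Type": "2"}]}

# Concatenate the arrays, mirroring jq's `[ inputs.Interactions ] | add`.
merged = {"Interactions": file1["Interactions"] + file2["Interactions"]}
print(json.dumps(merged, indent=2))
```

Unlike jq, json.loads will reject the trailing commas in the original files outright, so they would still need to be stripped first.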
We are creating a deployment template using Azure Resource Manager. We have virtually everything set up; however, the created hosting plan does not pick up the correct pricing tier. No matter what values we use, it always seems to default to the 'Free' 'F1' pricing plan.
Here is the relevant section:
{
"apiVersion": "2015-08-01",
"type": "Microsoft.Web/serverfarms",
"name": "[variables('sitePlanName')]",
"location": "[resourceGroup().location]",
"sku": {
"name": "B1",
"tier": "Basic",
"capacity": 1
},
"properties": {
"numberOfWorkers": 1
}
},
Any thoughts would be much appreciated.
Regards
Can you try to specify the SKU inside the "properties" node of the serverfarms description, something like this:
{
"apiVersion":"2015-08-01",
"name":"[variables('hostingPlanName')]",
"type":"Microsoft.Web/serverfarms",
"location":"[resourceGroup().location]",
"properties":{
"name":"[variables('hostingPlanName')]",
"sku":"Basic",
"workerSize":"1",
"numberOfWorkers":1
}
}
Possible values for "sku" are : Free, Shared, Basic, Standard, Premium
For Basic, Standard and Premium SKUs, the "workerSize" possible values could be 0 (small), 1 (medium) or 2 (large) :
"sku": {
"type": "string",
"allowedValues": [
"Free",
"Shared",
"Basic",
"Standard",
"Premium"
],
"defaultValue": "Free"
},
"workerSize": {
"type": "string",
"allowedValues": [
"0",
"1",
"2"
],
"defaultValue": "0"
}
}
Hope this helps.
Julien
Try with this resource block (works for me) to create S1 instance:
{
"apiVersion": "2015-08-01",
"type": "Microsoft.Web/serverfarms",
"name": "[parameters('hostingPlanName')]",
"location": "[resourceGroup().location]",
"properties": {
"name": "[parameters('hostingPlanName')]",
"workerSize": "0",
"numberOfWorkers": 1
},
"sku": {
"name": "S1",
"tier": "Standard",
"size": "S1",
"family": "S",
"capacity": "1"
}
}
For basic Tier use this sku:
"sku": {
"name": "B1",
"tier": "Basic",
"size": "B1",
"family": "B",
"capacity": 1
}
I want to index and search nested JSON in Solr. Here is my JSON:
{
"id": "44444",
"headline": "testing US",
"generaltags": [
{
"type": "person",
"name": "Jayalalitha",
"relevance": "0.334",
"count": 1
},
{
"type": "person",
"name": "Kumar",
"relevance": "0.234",
"count": 1
}
],
"socialtags": {
"type": "SocialTag",
"name": "US",
"importance": 2
},
"topic": {
"type": "Topic",
"name": "US",
"score": "0.936"
}
}
When I try to index it, I get the error "Error parsing JSON field value. Unexpected OBJECT_START".
When we tried to use a multiValued field and index, we couldn't search using the multivalued field; it returns "undefined field".
Also, please advise whether I need to make any changes in the schema.xml file.
You are nesting child documents within your document. You need to use the proper syntax for nested child documents in JSON:
[
{
"id": "1",
"title": "Solr adds block join support",
"content_type": "parentDocument",
"_childDocuments_": [
{
"id": "2",
"comments": "SolrCloud supports it too!"
}
]
},
{
"id": "3",
"title": "Lucene and Solr 4.5 is out",
"content_type": "parentDocument",
"_childDocuments_": [
{
"id": "4",
"comments": "Lots of new features"
}
]
}
]
Have a look at this article which describes JSON child documents and block joins.
Using the format mentioned by @qux, you will face "Expected: OBJECT_START but got ARRAY_START at [16]", "code": 400,
since JSON starting with [...] is parsed as a JSON array
{
"id": "44444",
"headline": "testing US",
"generaltags": [
{
"type": "person",
"name": "Jayalalitha",
"relevance": "0.334",
"count": 1
},
{
"type": "person",
"name": "Kumar",
"relevance": "0.234",
"count": 1
}
],
"socialtags": {
"type": "SocialTag",
"name": "US",
"importance": 2
},
"topic": {
"type": "Topic",
"name": "US",
"score": "0.936"
}
}
The above format is correct.
Regarding searching: use the index to search the elements of the JSON array.
The workaround for this is to keep the whole JSON object inside another JSON object and then index it. You can try the following way:
{
"data": [
{
"id": "44444",
"headline": "testing US",
"generaltags": [
{
"type": "person",
"name": "Jayalalitha",
"relevance": "0.334",
"count": 1
},
{
"type": "person",
"name": "Kumar",
"relevance": "0.234",
"count": 1
}
],
"socialtags": {
"type": "SocialTag",
"name": "US",
"importance": 2
},
"topic": {
"type": "Topic",
"name": "US",
"score": "0.936"
}
}
]
}
see the syntax in http://yonik.com/solr-nested-objects/
$ curl http://localhost:8983/solr/demo/update?commitWithin=3000 -d '
[
{id : book1, type_s:book, title_t : "The Way of Kings", author_s : "Brandon Sanderson",
cat_s:fantasy, pubyear_i:2010, publisher_s:Tor,
_childDocuments_ : [
{ id: book1_c1, type_s:review, review_dt:"2015-01-03T14:30:00Z",
stars_i:5, author_s:yonik,
comment_t:"A great start to what looks like an epic series!"
}
,
{ id: book1_c2, type_s:review, review_dt:"2014-03-15T12:00:00Z",
stars_i:3, author_s:dan,
comment_t:"This book was too long."
}
]
}
]'
Nested documents are supported from Solr 5.3 onwards.
Suggest the best way to read this JSON and insert it into a MySQL database.
Example JSON:
{
"PaymentOrder": [
{
"$": {
"Id": "83C44C4A-7EFD-491E-913E-007A3598618F",
"Name": "",
"OrderNumber": "AH9NZPUI",
"OrderType": "Recurring",
"Amount": "1.000000",
"CouponDiscountAmount": ".000000",
"ChargeAmount": ".000000",
"CurrencyCode": "USD",
"OwnerId": "D9F48450-AB86-4ABE-B9AD-66FE3B7AEFB1",
"OrderStatus": "Pending",
"PaymentProcessor": "Global Collect",
"PaymentMethodType": "Visa",
"SoftDescriptorText": "CNext_AH9NZPUI",
"UserPaymentInfoId": "FDD9C487-6F66-4919-B945-766699C9EF1D",
"IsAuthorization": "1",
"CreateTime": "2012-07-18T10:21:47.7610377",
"UpdateTime": "2012-07-18T10:21:47.7610377",
"IsEnabled": "1",
"IsDeleted": "0"
}
},
{
"$": {
"Id": "03B1600F-F92A-47BA-8B53-00B70A942E43",
"Name": "",
"OrderNumber": "3427BBC3",
"OrderType": "Recurring",
"Amount": ".000000",
"CouponDiscountAmount": ".000000",
"ChargeAmount": ".000000",
"CurrencyCode": "USD",
"OwnerId": "9A1186E4-BAC6-43D0-8C6F-B02ECD6544B5",
"OrderStatus": "Success",
"PaymentProcessor": "Global Collect",
"PaymentMethodType": "Visa",
"SoftDescriptorText": "ComputeNext Inc Order Number - 3427BBC3",
"UserPaymentInfoId": "142C6B38-A470-40A9-B7A7-1C8CDD6FBD2C",
"BillingHistoryId": "6BE8DF8E-4C97-492B-9407-38EBA2B942F2",
"InvoiceDate": "2013-03-26T14:02:07.8570464",
"IsAuthorization": "0",
"CreateTime": "2013-03-26T18:08:19.5383118",
"UpdateTime": "2013-03-26T18:08:19.5695118",
"IsEnabled": "1",
"IsDeleted": "0"
}
},
{
"$": {
"Id": "8FC1D8EA-C388-4F31-91C7-00CB54C337A4",
"Name": "",
"OrderNumber": "YTM8TOKQ",
"OrderType": "Recurring",
"Amount": "1.000000",
"CouponDiscountAmount": ".000000",
"ChargeAmount": ".000000",
"CurrencyCode": "USD",
"OwnerId": "3702AB47-1095-4211-B844-82C47D0D9F6C",
"OrderStatus": "Failure",
"PaymentProcessor": "Global Collect",
"PaymentMethodType": "Visa",
"SoftDescriptorText": "CNext_YTM8TOKQ",
"UserPaymentInfoId": "19193F72-14D1-4A72-9676-2493002711ED",
"TransactionDuration": "0:0:40:809",
"LastTransactionNumber": "HC0F4CBYMT14QP3",
"IsAuthorization": "1",
"CreateTime": "2012-07-26T21:15:25.8004068",
"UpdateTime": "2012-07-26T21:16:06.6100785",
"IsEnabled": "1",
"IsDeleted": "0"
}
}
]
}
The attributes present in the JSON are equivalent to columns in MySQL.
You can code this in the language of your choice, of course (you just need a JSON parser and a MySQL client, and these are readily available in all languages these days), or you could use an ETL tool.
Advantage of using ETL
Changes to the source or destination format (e.g. all of a sudden you need to read these data from a CSV file or an XML file) or content (fields are added, disappear, or are renamed) can typically be resolved by reconfiguring the tool.
Disadvantage of using ETL
It's yet another (rather heavy) piece of infrastructure you'll have to manage, and some will argue that using an ETL tool for such a task is like killing a fly with a sledgehammer.
Nevertheless, depending on your flexibility requirements, an ETL tool could be the best/easiest solution.
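As a sketch of the "code it yourself" option, assuming a hypothetical MySQL table payment_order whose columns match the JSON attribute names one-to-one (as the question implies), the standard library plus any MySQL driver is enough. The driver call is shown only as a comment, since connection details are unknown:

```python
import json

# Hypothetical, abbreviated sample of the JSON from the question.
raw = '''{"PaymentOrder": [
  {"$": {"Id": "83C44C4A", "OrderNumber": "AH9NZPUI", "Amount": "1.000000"}},
  {"$": {"Id": "03B1600F", "OrderNumber": "3427BBC3", "Amount": ".000000"}}]}'''

# Each order's attributes live under the "$" key.
orders = [entry["$"] for entry in json.loads(raw)["PaymentOrder"]]

# Use the union of keys so rows with optional attributes (e.g. BillingHistoryId)
# still line up; attributes missing from an order become NULL.
columns = sorted({key for order in orders for key in order})
sql = "INSERT INTO payment_order ({}) VALUES ({})".format(
    ", ".join(columns), ", ".join(["%s"] * len(columns)))
rows = [tuple(order.get(col) for col in orders and [order for order in [o]][0] and columns) for o in orders for order in [o]]
rows = [tuple(order.get(col) for col in columns) for order in orders]

# With a real connection (e.g. a MySQLdb/PyMySQL cursor):
#     cursor.executemany(sql, rows)
print(sql)
```

Generating one parameterized statement and feeding all rows to executemany keeps the insert safe from quoting issues and reasonably fast for batches like this.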