I have MediaConvert jobs that encode MP3 uploads into various formats. I'd also like to create a 30-second "preview" of an MP3 file by trimming it to start at 10 seconds and end at 40 seconds.
I have tried setting input clippings by adding timecode references as below, but they seem to be ignored completely and the whole file is encoded. Perhaps this is because MP3 files don't strictly have a timecode? These settings are in my Inputs JSON (PHP SDK):
"Inputs": [
{
"AudioSelectors": {
"Audio Selector 1": {
"Offset": 0,
"DefaultSelection": "DEFAULT",
"SelectorType": "TRACK",
"ProgramSelection": 1
}
},
"FilterEnable": "AUTO",
"PsiControl": "USE_PSI",
"FilterStrength": 0,
"DeblockFilter": "DISABLED",
"DenoiseFilter": "DISABLED",
"TimecodeSource": "EMBEDDED",
"FileInput": "'$file'",
"InputClippings": [
{
"EndTimecode": "00:00:45:00",
"StartTimecode": "00:00:20:00"
}
]
}
]
I have also tried adding the InputClippings in this format:
"inputs": [
{
"inputClippings": [
{
"endTimecode": "00:00:40:00",
"startTimecode": "00:00:10:00"
}
],
"audioSelectors": {
},
I think this parameter is case-sensitive. I do input clipping in MediaConvert occasionally and it works for me. Maybe try this:
"Inputs": [
{
"InputClippings": [
{
"EndTimecode": "00:00:40:00",
"StartTimecode": "00:00:10:00"
}
],
"AudioSelectors": {
"Audio Selector 1": {
"DefaultSelection": "DEFAULT",
"ProgramSelection": 1
}
},
"FileInput": "s3://my-bucket/abc.mp4"
}
]
The InputClippings feature is not currently supported for audio-only inputs. Rather than returning a warning or error, MediaConvert silently ignores the parameter for such inputs.
Resource: https://docs.aws.amazon.com/mediaconvert/latest/ug/feature-limitations-for-audio-only.html
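If MediaConvert is a hard requirement you may need to wait for audio-only clipping support, but if you can run another tool in your pipeline, one workaround is to cut the preview before (or instead of) the MediaConvert job. As a hedged example with ffmpeg (file names are placeholders):

ffmpeg -ss 10 -t 30 -i input.mp3 -c copy preview.mp3

This seeks to the 10-second mark and stream-copies 30 seconds of audio without re-encoding.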
I'm new to Couchbase. I want to update the channels for some of my documents through the sync function. Right now, instead of updating, it adds an extra channel to the document's metadata without removing the existing channel. Can anyone suggest how I can remove the existing channel from the document?
Sync Function:
function(doc, oldDoc) {
//....
if (doc.docType === "differentType") {
channel("differentChannel");
expiry(2332800);
return;
}
//.......
}
Document:
{
"channels": [
"abcd"
],
"docType": "differentType",
"_id" : "asjnc"
}
Metadata:
{
"meta": {
"id": "asjnc",
"rev": "64-1b500000000",
"expiration": 1650383285,
"flags": 0,
"type": "json"
},
"xattrs": {
"_sync": {
"rev": "1-db30e607872",
"sequence": 777,
"recent_sequences": [
777
],
"history": {
"revs": [
"1-db30e607872"
],
"parents": [
-1
],
"channels": [
[
"differentChannel"
]
]
},
"channels": {
"differentChannel": null
}
}
}
}
Expectation of the document with the same metadata:
{
"channels": [ ], // <--- no channels
"docType": "differentType",
"_id" : "asjnc"
}
With this sync function, for a document of type differentType, the channel differentChannel is set in the xattrs section of the metadata. But the channel that was added earlier from Couchbase Lite is not removed. Can anyone help?
I answered this in the Couchbase Forums: https://forums.couchbase.com/t/remove-channels-from-a-document/33212
The "channels" property in a document is counter-intuitively not describing what channels the document is currently in - it's just a user-definable field that happens to be the default routing for channels if you don't specify a sync function. It's up to the writer of the document what it should contain.
If you have another means of channel assignments (like "docType" in your case), then you don't need to specify "channels" in the document. The sync metadata shows that the document is in "differentChannel" at revision 1-db30e607872 but the contents of the document can be arbitrary.
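As a minimal sketch of that idea (the fallback branch is hypothetical): since routing comes only from channel() calls in the sync function, you can simply drop the "channels" array from the document body, and the document will still be placed in differentChannel:

function (doc, oldDoc) {
    if (doc.docType === "differentType") {
        channel("differentChannel");
        expiry(2332800);
        return;
    }
    // Hypothetical fallback for other doc types: route by the body's
    // "channels" field, mirroring the default when no sync function is set.
    channel(doc.channels);
}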
I'm using serilog with this configuration:
{
"Serilog": {
"Using": [ "Serilog.Sinks.Console", "Serilog.Sinks.File" ],
"MinimumLevel": "Debug",
"Enrich": [ "FromLogContext", "WithMachineName", "WithThreadId" ],
"WriteTo": [
{ "Name": "Console" },
{
"Name": "File",
"Args": {
"path": "./logs/performance-{Date}.log",
"rollingInterval": "Day",
"fileSizeLimitBytes": 1000,
"rollOnFileSizeLimit": true,
"retainedFileCountLimit": null,
"shared": true
}
}
]
}
}
The output file should look like 20210613-performance.log, but it comes out as {Date}-performance20210613.log.
What am I doing wrong?
The {Date} placeholder is not a feature of the Serilog.Sinks.File sink that you're using. You're probably confusing it with the (deprecated) Serilog.Sinks.RollingFile sink, which has this feature.
With Serilog.Sinks.File, at this time, you cannot define where the date will appear. It's always appended to the end of the file name you choose (and before the sequence number if you also are rolling by file size).
There have been attempts to implement this feature, but as of this writing it's not yet there.
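As a sketch, keeping the other Args from your configuration, you would drop the placeholder and let the sink append the date itself:

{
  "Name": "File",
  "Args": {
    "path": "./logs/performance.log",
    "rollingInterval": "Day",
    "fileSizeLimitBytes": 1000,
    "rollOnFileSizeLimit": true,
    "retainedFileCountLimit": null,
    "shared": true
  }
}

With "rollingInterval": "Day" this produces files like logs/performance20210613.log (and performance20210613_001.log once the size limit forces a roll), though not the 20210613-performance.log name you were after.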
Is it possible to allow content creators to add only 1 or 2 elements in a repeatable group? I am looking for something like this:
"content_teasers" : {
"type" : "Slice",
"fieldset" : "Content Teasers",
"description" : "One or two teasers with Image, Title, Text and an optional link",
"repeat": 2,
"repeat" : {
"image" : {
"type" : "Image",
"config" : {
[...]
where "repeat": 2 sets the number of allowed elements.
No, it's not possible.
The way to do it today would be to add the fields in the non-repeatable section of a slice.
I'm part of Prismic's team so I just logged this as a feature request for the dev team!
Is it because you have strict design rules requiring a certain number of components, or because you don't want to handle the display of too many cases (if they put in 1, 2, or 10 items)?
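To illustrate that workaround (field names here are hypothetical): instead of a repeatable group, duplicate the teaser fields in the slice's "non-repeat" section and leave "repeat" empty, so editors can fill in at most two teasers:

"content_teasers": {
  "type": "Slice",
  "fieldset": "Content Teasers",
  "non-repeat": {
    "teaser_1_image": { "type": "Image" },
    "teaser_1_title": { "type": "Text", "config": { "label": "Teaser 1 Title" } },
    "teaser_2_image": { "type": "Image" },
    "teaser_2_title": { "type": "Text", "config": { "label": "Teaser 2 Title" } }
  },
  "repeat": {}
}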
I've recently run into a similar problem but found a hacky solution that's been working for me (still hoping something is officially added to Prismic). If you add the number of items you want to your page before you add "repeat": false to your JSON config, it will keep those items but remove the ability to add more.
For anyone who comes across this, here's a snippet from one of my custom types as an example:
{
"Main": {
"title": {
"type": "Text",
"config": {
"label": "Title"
}
},
"uid": {
"type": "UID",
"config": {
"label": "uid"
}
}
},
"Hero": {
"hero_images": {
"type": "Group",
"config": {
"repeat": false,
"fields": {
"image": {...}
}
}
}
}
}
I have the following JSON data
{
"results": [
{
"alternatives": [
{
"confidence": 0.6,
"transcript": "state radio "
}
],
"final": true
},
{
"alternatives": [
{
"confidence": 0.77,
"transcript": "tomorrow I'm headed to mine nine
consecutive big con I'm finna old tomorrow I've got may meet and greet
with whoever's dumb enough to line up "
}
],
"final": true
If I try data["results"], it works and I get everything inside "results".
But if I try data["alternatives"], it doesn't work.
I want to get the text in "transcript", how can I get that?
"transcripts" is not a direct child of data. It is, instead, the child of element "alternatives", which is a child of each element of the list "results", which is, in turn, the direct child of data. So, to get your contents of transcript as a list, do:
transcripts = [r["alternatives"][0]["transcript"] for r in data["results"]]
To access a transcript inside alternatives:
data['results'][0]['alternatives'][0]['transcript']
Change the first index (0, 1, 2, 3, ...) according to which result's transcript you need to extract.
You can get the expected result with the following code:
import json
d='''
{
"results": [
{
"alternatives": [
{
"confidence": 0.6,
"transcript": "state radio "
}
],
"final": true
},
{
"alternatives": [
{
"confidence": 0.77,
"transcript": "tomorrow I'm headed to mine nine consecutive big con I'm finna old tomorrow I've got may meet and greet with whoever's dumb enough to line up "
}
],
"final": true
}
]}
'''
data = json.loads(d)
for result in data['results']:
    transcript = result['alternatives'][0]['transcript']
    print(transcript)
We have a heavily nested JSON document containing server metrics. The document contains over 1,000 fields, some of which are completely irrelevant to us for analytics purposes, so I would like to remove them before indexing the document in Elasticsearch.
However, I am unable to find the correct filter to use, as the fields I want to remove have common names in multiple different objects within the document.
The source document looks like this (reduced in size for brevity):
[
{
"server": {
"is_master": true,
"name": "MYServer",
"id": 2111
},
"metrics": {
"Server": {
"time": {
"boundary": {},
"type": "TEXT",
"display_name": "Time",
"value": "2018-11-01 14:57:52"
}
},
"Mem_OldGen": {
"used": {
"boundary": {},
"display_name": "Used(mb)",
"value": 687
},
"committed": {
"boundary": {},
"display_name": "Committed(mb)",
"value": 7116
},
"cpu_count": {
"boundary": {},
"display_name": "Cores",
"value": 4
}
}
}
}
]
The data is loaded into Logstash using the http_poller input plugin and needs to be processed before being sent to Elasticsearch for indexing.
I am trying to remove the fields that are not relevant for us to track for analytics purposes; these include the "display_name" and "boundary" fields in each JSON object within the different metrics.
I have tried using the mutate filter to remove the fields, but because they exist in so many different objects it requires too many hard-coded paths to be added to the Logstash config.
I have also looked at the ruby filter, which seems promising as it can inspect the event, but I am unable to get it to crawl the entire JSON document or, more importantly, actually remove the fields.
Here is what I was trying as a test:
filter {
  split {
    field => "message"
  }
  ruby {
    code => '
      event.get("[metrics][Mem_OldGen][used]").to_hash.keys.each { |k|
        logger.info("field is:", k)
        if k.include?("display_name")
          event.remove(k)
        end
        if k.include?("boundary")
          event.remove(k)
        end
      }
    '
  }
}
It first splits the input at the message level to create one event per server, then tries to remove the fields from a specific metric.
Any help would be greatly appreciated.
If I understand correctly, you want to keep just the value key.
So, considering the response hash:
response = {
"server": {
"is_master": true,
"name": "MYServer",
"id": 2111
},
"metrics": {
...
You could do:
response[:metrics].transform_values { |hh| hh.transform_values { |h| h.delete_if { |k, v| k != :value } } }
#=> {:Server=>{:time=>{:value=>"2018-11-01 14:57:52"}}, :Mem_OldGen=>{:used=>{:value=>687}, :committed=>{:value=>7116}, :cpu_count=>{:value=>4}}}
Note that delete_if mutates the nested hashes in place, so after this call response itself also carries only the :value keys under :metrics.
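To apply the same idea inside Logstash, here is a hedged sketch of a ruby filter that recursively walks the event's metrics field and strips the unwanted keys (assuming, as in your split example, that metrics is a top-level field of each event; the prune lambda is illustrative and not tested against your full document):

ruby {
  code => '
    # Recursively strip unwanted keys from nested hashes and arrays.
    prune = lambda do |node|
      case node
      when Hash
        node.delete("display_name")
        node.delete("boundary")
        node.each_value { |v| prune.call(v) }
      when Array
        node.each { |v| prune.call(v) }
      end
    end
    metrics = event.get("metrics")
    prune.call(metrics)
    event.set("metrics", metrics)  # write the pruned structure back
  '
}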