Retrieving a media upload dynamically from an array in the controller - JSON

I am using Laravel 5.1 with an Illuminate\Http\Request instance to access the submitted files, but I can't seem to find the correct way to access the media element. I can access every other value in the request.
Here is the JSON data format I am using:
{"_method":"PUT",
"topic":"1 test",
"description":"1 test",
"media_description":"1 test",
"old_parts":
{"part-1":
{"sub_header":"test 2",
"text_field":"test 2 ",
"article_id":"18",
"media":{}},
"part-2":
{"sub_header":"test 3 ",
"text_field":"test 3",
"article_id":"18",
"media":{}}
},
"published":"1",
"media":{}
}
I access the media in the controller using two foreach loops: one over the database collection and one over the form content. I can access the media elements with the following method:
if ($file = $request->file('media-' . $running_number)) {...}
But that hurts readability considerably, because I would have to name the fields media-1, media-2, ... instead of accessing them in a foreach loop like I do with everything else.
So is there a way to access the file with a foreach loop variable?
For example, something like this:
foreach($input['old_parts'] as $old_part) {
if ($old_part->hasFile('media')) { ... }
}
If I use the lines above, I get a FatalThrowableError: Call to a member function hasFile() on array.
Or should I just use something like
$request->file('old_parts[part-'.$x.'][media]');
To access the file input, using a custom variable to count the index? The problem is just that I can't figure out how to reach multiple levels deep into the JSON with file(), as both old_parts['..']['..'] and old_parts['part-1']->media return nothing (null).
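
For reference, here is a minimal sketch of one way this is often handled, assuming the files really are posted as old_parts[part-N][media]. It relies on file() accepting dot notation for nested inputs, which should be verified against Laravel 5.1:

// Sketch only: iterate the part keys from the regular input and look up the
// matching uploaded file with dot notation (verify dot-notation support for
// file() on Laravel 5.1; on newer versions it behaves like input()).
foreach ($request->input('old_parts', []) as $key => $old_part) {
    $file = $request->file("old_parts.{$key}.media");

    if ($file !== null && $file->isValid()) {
        // Handle the uploaded file for this part, e.g. move it into storage.
        $file->move(storage_path('app/media'), $file->getClientOriginalName());
    }
}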

Jmeter combine Json extractor variables to pass into request

There's a certain web request which has the following response:
{
    "data": {
        "articles": [
            {
                "id": "1355",
                "slug": "smart-device-connectivity's-impact-on-homes-workplaces",
                "title": "Smart device connectivity's impact on homes, workplaces",
                "published_at": "2022-01-28T21:30:00.000Z",
                "avg_rating": 0,
                "click_count": 60
            },
            {
                "id": "1363",
                "slug": "you-need-to-nurture-and-amplify-human-capabilities",
                "title": "You need to nurture and amplify human capabilities",
                "published_at": "2022-01-28T19:00:00.000Z",
                "avg_rating": 0,
                "click_count": 22
            }
        ]
    }
}
There are 702 records in total, and that number may increase or decrease over the coming months. I have successfully extracted the ID and slug into separate variables. My aim is to pass these two variables into another request in the following format, so that I can run it 702 times (or however many times matches the size of the ID/slug arrays):
testurl.com/insight/${id}/${slug}
Example:
testurl.com/insight/1355/smart-device-connectivity's-impact-on-homes-workplaces
testurl.com/insight/1363/you-need-to-nurture-and-amplify-human-capabilities
I used a ForEach Controller and was able to pass the slug, but the ID does not work. Does anyone know the solution?
If you're using a ForEach Controller to iterate the slug variable, the id one needs to be handled a little differently:
use the pre-defined __jm__ForEach Controller__idx variable to get the current iteration of the ForEach Controller
use the __intSum() function to increment it by 1, as the above variable is zero-based
use the __V() function to evaluate the matching id_x variable
Putting everything together:
testurl.com/insight/${__V(id_${__intSum(${__jm__ForEach Controller__idx},1,)},)}/${slug}
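For context, the id_1, id_2, ... variables that this expression relies on come from a JSON Extractor configured with Match No. -1 (all matches). A hypothetical configuration, reusing the variable names from the question, might look like this:

Names of created variables: id;slug
JSON Path expressions:      $.data.articles[*].id;$.data.articles[*].slug
Match No. (0 for Random):   -1;-1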
What error do you get?
Another answer reproduced the same setup: the sample JSON saved in a variable, a ForEach Controller iterating over the extracted matches, a second JSON Extractor inside the ForEach Controller, and the extracted values then used in the request. (The original answer illustrated the overall JMX structure and the final output with screenshots.)

How can I create an EMR cluster resource that uses spot instances without hardcoding the bid_price variable?

I'm using Terraform to create an AWS EMR cluster that uses spot instances as core instances.
I know I can use the bid_price variable within the core_instance_group block on an aws_emr_cluster resource, but I don't want to hardcode prices, as I'd have to change them manually every time the instance type changes.
Using the AWS Web UI, I'm able to choose the "Use on-demand as max price" option. That's exactly what I'm trying to reproduce, but in Terraform.
Right now I am trying to solve my problem using the aws_pricing_product data source. You can see what I have so far below:
data "aws_pricing_product" "m4_large_price" {
service_code = "AmazonEC2"
filters {
field = "instanceType"
value = "m4.large"
}
filters {
field = "operatingSystem"
value = "Linux"
}
filters {
field = "tenancy"
value = "Shared"
}
filters {
field = "usagetype"
value = "BoxUsage:m4.large"
}
filters {
field = "preInstalledSw"
value = "NA"
}
filters {
field = "location"
value = "US East (N. Virginia)"
}
}
data.aws_pricing_product.m4_large_price.result returns a JSON document containing the details of a single product (you can check the response of the example here). The actual on-demand price is buried somewhere inside this JSON, but I don't know how I can get at it.
I know I might be able to solve this by using an external data source and piping the output of an aws cli call to something like jq, e.g.:
aws pricing get-products --filters "Type=TERM_MATCH,Field=sku,Value=8VCNEHQMSCQS4P39" --format-version aws_v1 --service-code AmazonEC2 | jq [........]
But I'd like to know if there is any way to accomplish what I'm trying to do with pure Terraform. Thanks in advance!
Unfortunately the aws_pricing_product data source docs don't expand on how it should be used effectively, but the discussion in the pull request that added it offers some insight.
In Terraform 0.12 you should be able to use the jsondecode function to get at what you want; the following was given as an example in the linked pull request:
data "aws_pricing_product" "example" {
service_code = "AmazonRedshift"
filters = [
{
field = "instanceType"
value = "ds1.xlarge"
},
{
field = "location"
value = "US East (N. Virginia)"
},
]
}
# Potential Terraform 0.12 syntax - may change during implementation
# Also, not sure about the exact attribute reference architecture myself :)
output "example" {
values = jsondecode(data.json_query.example.value).terms.OnDemand.*.priceDimensions.*.pricePerUnit.USD
}
If you are stuck on Terraform <0.12 you might struggle to do this natively in Terraform other than the external data source approach you've already suggested.
@cfelipe put that "${jsondecode(data.aws_pricing_product.m4_large_price.result).terms.OnDemand.*.priceDimensions.*.pricePerUnit.USD}" in a locals block
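Building on that comment, here is a rough Terraform 0.12 sketch of wiring the decoded price into the cluster. The path into the pricing JSON and the EMR arguments shown are assumptions rather than a verified configuration:

# Sketch only: drill into the pricing JSON (terms.OnDemand -> priceDimensions
# -> pricePerUnit.USD) and take the first price found.
locals {
  m4_large_pricing = jsondecode(data.aws_pricing_product.m4_large_price.result)

  m4_large_on_demand_price = [
    for dimension in flatten([
      for term in values(local.m4_large_pricing.terms.OnDemand) :
      values(term.priceDimensions)
    ]) : dimension.pricePerUnit.USD
  ][0]
}

resource "aws_emr_cluster" "spot_example" {
  # ...other required arguments (name, release_label, service_role, ...) omitted

  core_instance_group {
    instance_type  = "m4.large"
    instance_count = 2
    bid_price      = local.m4_large_on_demand_price
  }
}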

Is there a way to write U-SQL queries without using EXTRACT?

I have a metadata activity output which is a JSON listing of the blobs in my container. I want to pass these names into my ForEach activity, where a U-SQL query is performed on each blob according to its file name. Is that possible?
You need to include either a SELECT or an EXTRACT. Since you are pulling from files, you are going to want to use EXTRACT.
If I understand your question correctly, you want to run different U-SQL scripts based on the file name.
There are a couple of ways to do this:
1) Use If Condition activities in Data Factory to call different U-SQL scripts based on the file name. Nesting the if conditions allows you to have more than two options. There are several string manipulation functions to help you with this; say one branch handles the case where @item().name contains 'a'.
{
    "name": "<Name of the activity>",
    "type": "IfCondition",
    "typeProperties": {
        "expression": {
            "value": "@equals(item().name, '<file name>')",
            "type": "Expression"
        },
        "ifTrueActivities": [
            {
                "<U-SQL script 1>"
            }
        ],
        "ifFalseActivities": [
            {
                "<U-SQL script 2>"
            }
        ]
    }
}
2) The second option is to use a single U-SQL script and do the branching there. Again, string manipulation functions can help via pattern matching. There is some organizational advantage to this, since you can store the unique logic in stored procedures and the U-SQL script simply checks the file name passed in and calls the relevant stored proc.
//This would be added by Data Factory
DECLARE @fileName string = "/Samples/Data/SearchLog.tsv";

IF @fileName == "/Samples/Data/SearchLog.tsv"
THEN
    @searchlog =
        EXTRACT UserId      int,
                Start       DateTime,
                Region      string,
                Query       string,
                Duration    int?,
                Urls        string,
                ClickedUrls string
        FROM "/Samples/Data/SearchLog.tsv"
        USING Extractors.Tsv();

    OUTPUT @searchlog
    TO @fileName
    USING Outputters.Csv();
ELSE
    @searchlog =
        EXTRACT UserId      int,
                Start       DateTime,
                Region      string,
                Query       string,
                Duration    int?,
                Urls        string,
                ClickedUrls string
        FROM @fileName
        USING Extractors.Tsv();

    OUTPUT @searchlog
    TO "/output/SearchLogResult1.csv"
    USING Outputters.Csv();
END;
Something to think about is that Data Lake Analytics is going to be more efficient if you can combine multiple files into one statement. You can have multiple EXTRACT and OUTPUT statements. I would encourage you to explore whether you could use pattern matching (file sets) in your EXTRACT statements to split the U-SQL processing without needing the ForEach loop in Data Factory.
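For illustration, here is a rough sketch of that file-set idea; the path pattern and the virtual fileName column are assumptions to adapt to your container layout:

// Sketch only: a file set pattern lets a single EXTRACT read many blobs at
// once; the {fileName} token becomes a virtual column usable in WHERE clauses
// or in the output.
@searchlog =
    EXTRACT UserId      int,
            Start       DateTime,
            Region      string,
            Query       string,
            Duration    int?,
            Urls        string,
            ClickedUrls string,
            fileName    string
    FROM "/Samples/Data/{fileName}.tsv"
    USING Extractors.Tsv();

OUTPUT @searchlog
TO "/output/SearchLogCombined.csv"
USING Outputters.Csv();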

Convert json to array using Perl

I have a chunk of json that has the following format:
{"page":{"size":7,"number":1,"totalPages":1,"totalElements":7,"resultSetId":null,"duration":0},"content":[{"id":"787edc99-e94f-4132-b596-d04fc56596f9","name":"Verification","attributes":{"ruleExecutionClass":"VerificationRule"},"userTags":[],"links":[{"rel":"self","href":"/endpoint/787edc99-e94f-4132-b596-d04fc56596f9","id":"787edc99-e94f-...
Basically the size attribute (in this case) tells me that there are 7 parts to the content section. How do I convert this chunk of json to an array in Perl, and can I do it using the size attribute? Or is there a simpler way like just using decode_json()?
Here is what I have so far:
my $resources = get_that_json_chunk(); # function returns exactly the json you see, except all 7 resources in the content section
my @decoded_json = @$resources;
foreach my $resource (@decoded_json) {
I've also tried something like this:
my $deserialize = from_json( $resources );
my @decoded_json = (@{$deserialize});
I want to iterate over the array and handle the data. I've tried a few different ways because I read a little about array refs, but I keep getting "Not an ARRAY reference" errors and "Can't use string ("{"page":{"size":7,"number":1,"to"...) as an ARRAY ref while "strict refs" in use"
Thank you to Matt Jacob:
my $deserialized = decode_json($resources);
print "$_->{id}\n" for #{$deserialized->{content}};

MongoDB - Dynamically update an object in nested array

I have a document like this:
{
    Name : val,
    AnArray : [
        {
            Time : SomeTime
        },
        {
            Time : AnotherTime
        }
        // ...arbitrarily more elements
    ]
}
I need to update "Time" to a Date type (right now it is a string).
I would like to do something like this pseudocode:
foreach record in document.AnArray { record.Time = new Date(record.Time) }
I've read the documentation on $ and "dot" notation, as well as several similar questions here, and I tried this code:
db.collection.update({_id:doc._id},{$set : {AnArray.$.Time : new Date(AnArray.$.Time)}});
I was hoping that $ would iterate over the indexes of the "AnArray" property, since I don't know its length for each record. But I am getting the error:
SyntaxError: missing : after property id (shell):1
How can I update each member of the array's nested values with a dynamic value?
There's no direct way to do that, because MongoDB doesn't support an update-expression that references the document. Moreover, the $ operator only applies to the first match, so you'd have to perform this as long as there are still fields where AnArray.Time is of $type string.
You can, however, perform that update client side, in your favorite language or in the mongo console using JavaScript:
db.collection.find({}).forEach(function (doc) {
    for (var i in doc.AnArray) {
        doc.AnArray[i].Time = new Date(doc.AnArray[i].Time);
    }
    db.outcollection.save(doc);
})
Note that this will store the migrated data in a different collection. You can also update the collection in-place by replacing outcollection with collection.
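As a rough sketch of that in-place variant, combined with the $type check mentioned above (the string-to-Date handling is an assumption about the data; the "string" alias for $type needs MongoDB 3.2+, older versions use $type: 2):

// Sketch only: find documents whose array still holds string Time values,
// convert those entries, and save back into the same collection.
db.collection.find({ "AnArray.Time": { $type: "string" } }).forEach(function (doc) {
    doc.AnArray.forEach(function (record) {
        if (typeof record.Time === "string") {
            record.Time = new Date(record.Time);
        }
    });
    db.collection.save(doc);
});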