I'm trying to update the binlog_format parameter in my Aurora MySQL 5.6.10 (Data API enabled) instance to ROW, but I'm not able to change it.
I've updated my custom parameter group accordingly, but the change is not reflected on the cluster when I run show variables like 'binlog_format'.
Right after changing the parameter group, the cluster goes into the Modifying state, but once that finishes the parameter still hasn't been updated.
I can't seem to find an option to reboot or stop the cluster on the AWS UI.
Using the CLI, I get this error trying to stop the cluster: An error occurred (InvalidDBClusterStateFault) when calling the StopDBCluster operation: Stop-db-cluster is not supported for these configurations.
I tried changing the capacity settings, but that didn't do anything.
Is there any other way I'm missing?
You'll have to check whether modifying that specific parameter is supported by the serverless engine mode by running this command:
aws rds describe-db-cluster-parameters --db-cluster-parameter-group-name <param-group-name>
If you read the output of the above command, you'll see that binlog_format lists only 'provisioned' under SupportedEngineModes:
{
"ParameterName": "binlog_format",
"ParameterValue": "OFF",
"Description": "Binary logging format for replication",
"Source": "system",
"ApplyType": "static",
"DataType": "string",
"AllowedValues": "ROW,STATEMENT,MIXED,OFF",
"IsModifiable": true,
"ApplyMethod": "pending-reboot",
"SupportedEngineModes": [
"provisioned"
]
}
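If you only want to see that one entry, you can narrow the output with the CLI's --query option (a sketch using JMESPath; <param-group-name> stays a placeholder):
# filter the parameter list down to binlog_format
aws rds describe-db-cluster-parameters --db-cluster-parameter-group-name <param-group-name> --query "Parameters[?ParameterName=='binlog_format']"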
For a parameter that is modifiable under serverless, the entry lists both engine modes, something like this:
{
"ParameterName": "character_set_server",
"Description": "The server's default character set.",
"Source": "engine-default",
"ApplyType": "dynamic",
"DataType": "string",
"AllowedValues": "big5,dec8,cp850,hp8,koi8r,latin1,latin2,swe7,ascii,ujis,sjis,hebrew,tis620,euckr,koi8u,gb2312,greek,cp1250,gbk,latin5,armscii8,utf8,ucs2,cp866,keybcs2,macce,macroman,cp852,latin7,utf8mb4,cp1251,utf16,cp1256,cp1257,utf32,binary,geostd8,cp932,eucjpms",
"IsModifiable": true,
"ApplyMethod": "pending-reboot",
"SupportedEngineModes": [
"provisioned",
"serverless"
]
}
Aurora does support the Start and Stop APIs now, so I'm surprised that you were not able to use them.
https://aws.amazon.com/about-aws/whats-new/2018/09/amazon-aurora-stop-and-start/
Can you try using them through the CLI?
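A minimal sketch of those calls, assuming my-cluster is a placeholder for your cluster identifier:
# stop the cluster (works only for supported configurations)
aws rds stop-db-cluster --db-cluster-identifier my-cluster
# start it again later
aws rds start-db-cluster --db-cluster-identifier my-cluster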
On a separate note, if you just want to reboot the engine so the parameter change takes effect, you can use the reboot-db-instance API.
https://docs.aws.amazon.com/cli/latest/reference/rds/reboot-db-instance.html
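For example (my-aurora-instance-1 is a placeholder for your writer instance identifier):
# reboot the instance so pending static parameter changes are applied
aws rds reboot-db-instance --db-instance-identifier my-aurora-instance-1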
How to declare an array of different types in MongoDB schemas?
I have a value in a document which can be a double or an int, and I tried declaring it like this:
"numberOf": {
"bsonType": ["long", "int"]
},
And I received this error:
property "numberOf" has invalid type: type [long,int] is not supported
In the docs they say that you can declare an array of bsonTypes or types like this:
"type": "<JSON Type>" | ["<JSON Type>", ...],
"bsonType": "<BSON Type>" | ["<BSON Type>", ...],
I also tried:
"numberOf": {
"type": "number"
},
And I can't save my schema, getting another error.
I don't know what I missed.
So apparently the problem comes from Sync (see here).
They are working on it.
Right now, it is not possible to sync multiple types of data for a single field.
What I did is I changed my types to "mixed":
"numberOf": {
"bsonType": "mixed"
},
This feature is in beta (see here), and you'll probably have to update your Realm package.
Just run this in your terminal:
npm install realm
Then
cd ios
pod install
Be careful of breaking changes.
If needed, uninstall your application from your emulator/simulator, then run
npx react-native run-ios
or
npx react-native run-android
I'm looking to allow only specific file types uploaded to Azure Storage to trigger an Azure Function.
Current function.json file:
{
"scriptFile": "__init__.py",
"bindings": [{
"name": "myblob",
"type": "blobTrigger",
"direction": "in",
"path": "{name}.json",
"connection": "storage-dev"
}]
}
Would I just add another path value like this...
"path": "{name}.json",
"path": "{name}.csv"
...or an array of values like this...
"path": [
"{name}.csv",
"{name}.json"
]
Can't seem to find an example in the docs.
EDIT:
Thank you @BowmanZhu! Your guidance was awesome.
I changed the trigger to Event Grid.
I was actually able to create a single Advanced Filter rather than creating multiple subscriptions.
You want a blob trigger to monitor two or more paths at the same time.
I can tell you simply: it's not possible. This is why you can't find the relevant documentation; there is no such thing. If you must use blob triggers for this requirement, you can only use multiple blob triggers, one per path (see the sketch below).
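If you go the multiple-blob-trigger route, each function gets its own function.json with its own path. A sketch of a second function just for csv files, reusing the binding you posted with only the path changed (name and connection are assumed to stay the same as in your setup):
{
  "scriptFile": "__init__.py",
  "bindings": [{
    "name": "myblob",
    "type": "blobTrigger",
    "direction": "in",
    "path": "{name}.csv",
    "connection": "storage-dev"
  }]
}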
But you have another option: an Event Grid trigger.
You just need to create multiple Event Grid subscriptions and point them at the same function endpoint.
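On the function side, that means switching the binding to an Event Grid trigger. A rough sketch of the function.json (the binding name "event" is just illustrative):
{
  "scriptFile": "__init__.py",
  "bindings": [{
    "name": "event",
    "type": "eventGridTrigger",
    "direction": "in"
  }]
}
The filtering down to .json and .csv blobs then happens on the Event Grid subscription (for example with a subject filter), not in the function binding.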
In scanning the docs I cannot find how to update part of a document.
For example, say the whole document looks like this:
{
"Active": true,
"Barcode": "123456789",
"BrandID": "9f3751ef-f14f-464a-bb86-854e99cf14c0",
"BuyCurrencyOverride": ".37",
"BuyDiscountAmount": "45.00",
"ID": "003565a3-4a0d-47d9-befb-0ac642cb8057",
}
But I only want to work with part of the document, as I don't want to be selecting/updating the whole document in many cases:
{
"Active": false,
"Barcode": "999999999",
"BrandID": "9f3751ef-f14f-464a-bb86-854e99cf14c0",
"ID": "003565a3-4a0d-47d9-befb-0ac642cb8057",
}
How can I use N1QL to update just those fields? UPSERT completely replaces the whole document, and the UPDATE statement documentation is not that clear.
Thanks
The answer to your question depends on why you want to update only part of the document (e.g., are you concerned about network bandwidth?), and how you want to perform the update (e.g., from the web console? from a program using the SDK?).
The 4.5 sub-document API, for which you provided a link in your comment, is a feature only available via the SDK (e.g., from Go or Java programs), and the goal of that feature is to reduce network bandwidth by not transmitting entire documents around. Does your use case include programmatic document modifications via the SDK? If so, then the sub-document API is a good way to go.
Using the "UPDATE" statement in N1QL is a good way to change any number of documents that match a pattern for which you can specify a "WHERE" clause. As noted above, it works very similarly to the "UPDATE" statement in SQL. To use your example above, you could change the "Active" field to false in any documents where the BuyDiscountAmount was "45.00":
UPDATE `mybucket` SET Active = false WHERE BuyDiscountAmount = "45.00"
When running N1QL UPDATE queries, almost all the network traffic will be between the Query, Index, and Data nodes of your cluster, so a N1QL update does not cause much network traffic into/out-of your cluster.
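To tie that back to your example document, here is a sketch that touches only the fields you listed (`mybucket` is a placeholder bucket name, and the key comes from your ID field):
UPDATE `mybucket` USE KEYS "003565a3-4a0d-47d9-befb-0ac642cb8057" SET Active = false, Barcode = "999999999";
The rest of the document (BrandID, BuyCurrencyOverride, and so on) is left untouched.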
If you provide more details about your use case, and why you want to update only part of your documents, I could provide more specific advice on the right approach to take.
The sub-document API introduced in Couchbase 4.5 is currently not used by N1QL. However, you can use the UPDATE statement to update parts of one or more documents.
http://developer.couchbase.com/documentation/server/current/n1ql/n1ql-language-reference/update.html
Let me know if you have any questions.
-Prasad
It is simple, just like a SQL query.
update `Employee` set District='SambalPur' where EmpId="1003"
and here is the resulting document:
{
"Employee": {
"Country": "India",
"District": "SambalPur",
"EmpId": "1003",
"EmpName": "shyam",
"Location": "New-Delhi"
}
}
I was led to believe that you can wildcard the fileName property in an Azure Blob source dataset.
I want to pick up only certain csv files from blob storage that exist in the same directory as other files I don't want to process:
i.e.
root/data/GUJH-01.csv
root/data/GUJH-02.csv
root/data/DFGT-01.csv
I want to process GUJH*.csv and not DFGT-01.csv
Is this possible? If so, why is my blob source validation failing, informing me that the file does not exist (the message reports that the root/data blob does not exist)?
Thanks in advance.
Answering my own question..
There's no wildcard, but there is a 'starts with' behavior which will work in my scenario:
Instead of root/data/GUJH*.csv, I can put root/data/GUJH in the folderPath property and it will bring in all root/data/GUJH files.
:)
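In dataset JSON terms, the prefix approach looks something like this (a sketch; only folderPath matters here, and the format block is illustrative):
"typeProperties": {
  "folderPath": "root/data/GUJH",
  "format": {
    "type": "TextFormat",
    "columnDelimiter": ","
  }
},
Every blob whose path starts with root/data/GUJH (GUJH-01.csv, GUJH-02.csv, ...) is picked up, while DFGT-01.csv is ignored.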
Just adding some more detail here because I'm finding this a very steep learning curve and I'd like to document it for my sake and for others.
Given a sample file like this (no extensions in this case) in blob storage,
ZZZZ_20170727_1324
We can see the middle part is in yyyyMMdd format.
This is uploaded to the folder Landing inside the container MyContainer.
This was part of my dataset definition:
"typeProperties": {
"folderPath": "MyContainer/Landing/ZZZZ_{DayCode}",
"format": {
"type": "TextFormat",
"columnDelimiter": "\u0001"
},
"partitionedBy": [
{
"name": "DayCode",
"value": {
"type": "DateTime",
"date": "SliceStart",
"format": "yyyyMMdd"
}
}
]
},
Note that it's a 'prefix', which you will see in the log/error messages, if you can find them (good luck).
If you want to test loading this particular file, you need to press the 'Diagram' button, then drill into your pipeline until you find the target dataset, the one the file is being loaded into (I am loading this into SQL Azure). Click on the target dataset, then go and find the correct timeslice. In my case I need to find the timeslice with a start of 20170727 and run that one.
This will make sure the correct file is picked up and loaded in to SQL Azure
Forget about manually running pipelines or activities; that's just not how it works. You need to run the output dataset under a timeslice to pull the data through.
Is there a way, in the JSON template under AWS::EC2::Instance, to specify the number of instances?
You can use an Auto Scaling group with a fixed size:
"MyFixedSizeGroup":{
"Type":"AWS::AutoScaling::AutoScalingGroup",
"Properties":{
"LaunchConfigurationName":{"Ref":"GlobalWorkersSmallLaunchConf"},
"AvailabilityZones" : [ "us-east-1a" ],
"MinSize":"4",
"MaxSize":"4",
"DesiredCapacity":"4",
"Tags":[{"Key":"Name", "Value":"worker instance", "PropagateAtLaunch":"true"}]
}
}
and the desired launch configuration, for example:
"GlobalWorkersSmallLaunchConf":{
"Type":"AWS::AutoScaling::LaunchConfiguration",
"Properties":{"KeyName":{"Ref":"MyKeyName"},
"ImageId":"ami-SomeAmi",
"UserData":{"Fn::Base64":{"Fn::Join":["",[{"Ref":"SomeInitScript"}]]}},
"SecurityGroups":[{"Ref":"InstanceSecurityGroup"}],
"InstanceType":"m1.small",
"InstanceMonitoring":"false"
}
}
BTW, this wasn't available through the dashboard until last week.
CloudFormation does not provide any feature that you cannot do from the AWS Console. Can you specify the number of instances to be created when you are creating it from the AWS Console? No, you cannot.
There is an option in the AWS Console to specify the number of instances to be created.
But there is no such option in CloudFormation for AWS::EC2::Instance.
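So with plain AWS::EC2::Instance you have to declare one resource per instance, for example (a sketch; the resource names are illustrative and the AMI placeholder is reused from the launch configuration above):
"WorkerInstance1":{
  "Type":"AWS::EC2::Instance",
  "Properties":{"ImageId":"ami-SomeAmi","InstanceType":"m1.small"}
},
"WorkerInstance2":{
  "Type":"AWS::EC2::Instance",
  "Properties":{"ImageId":"ami-SomeAmi","InstanceType":"m1.small"}
}
That is why the Auto Scaling group answer above is usually the better way to get N copies from one definition.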