Overriding default key bindings in Sublime Text 2 / PlainTasks

I have a problem overriding some default key bindings of the PlainTasks plugin in ST2. The plugin defines alt+c and alt+o shortcuts that I use for typing Polish characters, so I added the following lines to my Packages/User/Default (OSX).sublime-keymap:
// ć and ó for PlainTasks
{ "keys": ["super+alt+c"], "command": "plain_tasks_cancel", "context": [{"key": "selector", "operator": "equal", "operand": "text.todo" }] },
{ "keys": ["super+alt+o"], "command": "plain_tasks_open_link","context": [{ "key": "selector", "operator": "equal", "operand": "text.todo" }] }
However, PlainTasks keeps ignoring my own settings. If I change Packages/PlainTasks/Default (OSX).sublime-keymap instead, it gets overwritten with the defaults again, either the next time I open ST2 or the next time Package Control updates the package; I'm not sure which.
Any ideas why this happens?

I took a quick look at the key bindings and they use context appropriately, so there is no problem with them reusing the super+d binding. In addition to your rebindings for the PlainTasks commands, you also need to rebind the input keys. Insert the following entries in your user key bindings as well:
{"keys": ["alt+c"], "command": "insert", "args": {"characters": "ć"}},
{"keys": ["alt+o"], "command": "insert", "args": {"characters": "ó"}}

I would suggest opening a new issue on GitHub requesting that they change the key bindings. You can reference the part of the documentation that says Option+<alphanum> should not be used for any OS X key bindings, as it causes exactly the problem you're seeing.
You might also want to point them toward skuroda's FindKeyConflicts plugin, as I noticed that at least one of their key bindings (⌘D) conflicts with a built-in Sublime shortcut (expand selection to word).

I had a similar problem, but skuroda's solution didn't help me.
The reason, I believe, is that on the "Polish programmer keyboard" the right Alt key is actually mapped to the ctrl+alt combination. Therefore I had to put "ctrl+alt+c" as the keys.
I found this hint at https://www.opensoft.com.pl/article/sublime-keys
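Based on that, the insert rebindings from the accepted answer would presumably look like this on such a layout (only the ć case is described above; the ó entry is an assumption by analogy):
{ "keys": ["ctrl+alt+c"], "command": "insert", "args": {"characters": "ć"} },
{ "keys": ["ctrl+alt+o"], "command": "insert", "args": {"characters": "ó"} }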

Related

How to specify multiple file types for Azure Function Blob Trigger input binding?

I'm looking to have only specific file types uploaded to Azure Storage trigger an Azure Function.
Current function.json file:
{
  "scriptFile": "__init__.py",
  "bindings": [{
    "name": "myblob",
    "type": "blobTrigger",
    "direction": "in",
    "path": "{name}.json",
    "connection": "storage-dev"
  }]
}
Would I just add another path value like this...
"path": "{name}.json",
"path": "{name}.csv"
...or an array of values like this...
"path": [
"{name}.csv",
"{name}.json"
]
Can't seem to find an example in the docs.
EDIT:
Thank you @BowmanZhu! Your guidance was awesome.
Changed the trigger to Event Grid.
I was actually able to create a single Advanced Filter rather than multiple subscriptions:
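The original screenshot isn't reproduced here, but an Event Grid advanced filter of roughly this shape (matching on the suffix of the blob event's subject; the key and operator shown are assumptions based on the description) would cover both extensions in one subscription:
"filter": {
  "advancedFilters": [
    { "key": "subject", "operatorType": "StringEndsWith", "values": [".csv", ".json"] }
  ]
}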
You want a blob trigger to monitor two or more paths at the same time.
Simply put, that's impossible, which is why you can't find it in the documentation: there is no such thing. If you must stick with blob triggers for your requirements, your only option is to use multiple blob triggers.
But you have another option: an Event Grid trigger.
You just need to create multiple Event Grid subscriptions and point them at the same function endpoint.
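For reference, a minimal function.json sketch for the Event Grid variant of the function might look like this (the binding name "event" is just an illustrative choice, not from the original post):
{
  "scriptFile": "__init__.py",
  "bindings": [{
    "name": "event",
    "type": "eventGridTrigger",
    "direction": "in"
  }]
}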

Parameter group changes not reflecting on Aurora Serverless DB cluster

I'm trying to update the binlog_format parameter on my Aurora MySQL 5.6.10 (Data API enabled) cluster to ROW, but I'm not able to change it.
I've updated my custom parameter group accordingly but those changes do not reflect on the cluster when I run show variables like 'binlog_format'.
Right after changing the parameter group, the cluster goes into Modifying state but after that finishes the parameter hasn't been updated.
I can't seem to find an option to reboot or stop the cluster on the AWS UI.
Using the CLI, I get this error trying to stop the cluster: An error occurred (InvalidDBClusterStateFault) when calling the StopDBCluster operation: Stop-db-cluster is not supported for these configurations.
Tried changing the capacity settings but that didn't do anything.
Is there any other way I'm missing?
You'll have to check whether modifying that specific parameter is supported by the serverless engine by running this command:
aws rds describe-db-cluster-parameters --db-cluster-parameter-group-name <param-group-name>
If you read the output of the above command, you'll see that binlog_format only lists 'provisioned' under SupportedEngineModes:
{
  "ParameterName": "binlog_format",
  "ParameterValue": "OFF",
  "Description": "Binary logging format for replication",
  "Source": "system",
  "ApplyType": "static",
  "DataType": "string",
  "AllowedValues": "ROW,STATEMENT,MIXED,OFF",
  "IsModifiable": true,
  "ApplyMethod": "pending-reboot",
  "SupportedEngineModes": [
    "provisioned"
  ]
}
The ideal state for a modifiable parameter looks something like this:
{
  "ParameterName": "character_set_server",
  "Description": "The server's default character set.",
  "Source": "engine-default",
  "ApplyType": "dynamic",
  "DataType": "string",
  "AllowedValues": "big5,dec8,cp850,hp8,koi8r,latin1,latin2,swe7,ascii,ujis,sjis,hebrew,tis620,euckr,koi8u,gb2312,greek,cp1250,gbk,latin5,armscii8,utf8,ucs2,cp866,keybcs2,macce,macroman,cp852,latin7,utf8mb4,cp1251,utf16,cp1256,cp1257,utf32,binary,geostd8,cp932,eucjpms",
  "IsModifiable": true,
  "ApplyMethod": "pending-reboot",
  "SupportedEngineModes": [
    "provisioned",
    "serverless"
  ]
}
Aurora does support the Start and Stop APIs now, so I'm surprised that you were not able to use them.
https://aws.amazon.com/about-aws/whats-new/2018/09/amazon-aurora-stop-and-start/
Can you try using them through the CLI?
On a separate note, if you just want to reboot the engine for the parameter change to take effect, you just need to use the reboot-db-instance API.
https://docs.aws.amazon.com/cli/latest/reference/rds/reboot-db-instance.html

Which data-structure on CouchDB with three entities (User, Folder, Files)?

I'm trying to build a "relationship" in CouchDB for a Dropbox-like scenario with:
Users
Folders
Files
So far I'm struggling with whether to reference or embed the above entities, and I haven't tackled permissions yet. In my scenario I just want to store the path to the files and don't want to work with attachments. Here's what I have:
Option 1 (Separate Documents)
Here I chain everything together, and it (at least to me) looks like a copy of an RDBMS model, which should not be the goal when using NoSQL.
{
  "id": "user1",
  "type": "user",
  "folders": [
    "folder1",
    "folder2"
  ]
}
{
  "id": "folder1",
  "type": "folder",
  "path": "\\user1\\pictures",
  "files": [
    "file1",
    "file2"
  ]
}
{
  "id": "file1",
  "type": "file",
  "name": "myDoc.txt"
}
Option 2 (Separate Documents)
In this option I would leave the user document as it is and put the user's id into the folder document for referencing.
{
  "id": "user1",
  "type": "user"
}
{
  "id": "folder1",
  "type": "folder",
  "path": "\\user1\\pictures",
  "owner": "user1",
  "files": [
    "file1",
    "file2"
  ]
}
{
  "id": "file1",
  "type": "file",
  "name": "myDoc.txt"
}
Option 3 (Embedded Documents)
Similar to option 2, here I would drop the third document type (files) and embed everything into the folder document. I read that this is only an option if I don't have too many items to store, and I don't know how many items a user will store, for example.
{
  "id": "user1",
  "type": "user"
}
{
  "id": "folder1",
  "type": "folder",
  "path": "\\user1\\pictures",
  "owner": "user1",
  "files": [{
    "id": "file1",
    "type": "file",
    "name": "myDoc1.txt"
  }, {
    "id": "file2",
    "type": "file",
    "name": "myDoc2.txt"
  }]
}
Option 4
I could also put everything into just one document, but in this scenario that makes no sense. The JSON documents would get too big over time, and that's not desirable with regard to performance and load time.
Conclusion
For me, none of the above options seems to fit my scenario, and I would appreciate some input on how to design a proper database schema in CouchDB. Or maybe one of the above options is already a good start and I just don't see it.
To provide you with a concrete idea, I'd model a Dropbox clone somehow like this:
Shares: the root folders that are shared. There is no need to model subfolders, as they don't have different permissions. Here you can set the physical location of the folder and the users that are allowed to use it. I'd expect only a few shares per user, so you can keep the list of shares in memory.
Files: the actual files in the share. Depending on your use case, there's no need to keep the files themselves in a database, as the filesystem already is a great file database by itself! If you need to hash and deduplicate files (as Dropbox does), then you might create a cache in CouchDB.
This would be the document structure:
{
  "_id": "share.pictures",
  "type": "share",
  "owner": "Alice",
  "writers": ["Bob", "Carl"],
  "readers": ["Dorie", "Eve", "Fred"],
  "rootPath": "\\user1\\pictures"
},
{
  "_id": "file.2z32236e2sdwhatever",
  "type": "file",
  "path": ["vacations", "2017 maui"],
  "filename": "DSC1234.jpg",
  "size": 12356789,
  "hash": "1235a",
  "createdAt": "2017-07-29T15:03:20.000Z",
  "share": "share.pictures"
},
{
  "_id": "file.sdfwhatever",
  "type": "file",
  "path": ["vacations", "2015 alaska"],
  "filename": "DSC12345.jpg",
  "size": 11,
  "hash": "acd5a",
  "createdAt": "2017-07-29T15:03:20.000Z",
  "share": "share.pictures"
}
This way you can build a CouchDB view of files by share and path and query it by folder:
function (doc) {
  if (doc.type === 'file') emit([doc.share].concat(doc.path), doc.size);
}
If you want, you can also add a reduce function with just _sum and get a hierarchical size calculator for free (well, almost)!
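Such a design document might look roughly like this (a sketch; the database, design document, and view names follow the assumption in the next sentence):
{
  "_id": "_design/dropclone",
  "views": {
    "files": {
      "map": "function (doc) { if (doc.type === 'file') emit([doc.share].concat(doc.path), doc.size); }",
      "reduce": "_sum"
    }
  }
}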
Assuming you called the database 'dropclone' and added the view to a design document called 'dropclone' with the view name 'files', you would query it like this:
http://localhost:5984/dropclone/_design/dropclone/_view/files?key=["share.pictures","vacations"]
You'd get 12356800 as a result.
For
http://localhost:5984/dropclone/_design/dropclone/_view/files?key=["share.pictures","vacations"]&reduce=false&include_docs=true
You would get both files as a result.
You can also put the whole share name and path into the _id, because then you can directly access each file by its known path. You can still keep the path redundantly, or leave it out and split the _id into its path components dynamically.
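For illustration, such an _id might follow a convention like this (the separator and naming scheme are purely hypothetical, not from the original answer):
{
  "_id": "file.share.pictures/vacations/2017 maui/DSC1234.jpg",
  "type": "file",
  "size": 12356789,
  "share": "share.pictures"
}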
Other approaches would be (rough sketches of both follow below):
Use one CouchDB database per share and use CouchDB's _security mechanism to manage the access.
Split files into chunks, hash them and store the chunk hashes for each file. This way you can virtualize and deduplicate the complete file system. This is what Dropbox does behind the scenes to save storage space.
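For the database-per-share approach, the _security object of each share database could look like this (the user names are assumptions; roles are left empty):
{
  "admins": { "names": ["alice"], "roles": [] },
  "members": { "names": ["bob", "carl", "dorie", "eve", "fred"], "roles": [] }
}
For the chunking approach, a file document might store the ordered list of chunk hashes, roughly like this (the hash values are made up for illustration):
{
  "_id": "file.sdfwhatever",
  "type": "file",
  "filename": "DSC12345.jpg",
  "chunks": ["a1b2c3", "d4e5f6", "0789ab"],
  "share": "share.pictures"
}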
One thing you shouldn't do is store the files themselves in CouchDB; that will get messy quite quickly. npm had to experience that some years ago, and they had to move away from this model in a huge engineering effort.
Data modeling starts with the queries the application will use.
If your queries will be that a user sees all his/her folders, and opening a folder displays all docs and subfolders beneath it, then option 1 is a natural fit for those queries.
However, there is one very important question you need to answer first, especially for CouchDB: how large your database will be. If you need a DB partitioned across multiple nodes, performance would suffer, possibly to the point that the DB becomes unresponsive, because opening a folder with many docs would mean searching every partition. This is because partitioning is decided by hashing the ID, over which the user has no control. Performance will be fine for a small single-node (or non-partitioned) DB.
Option 2 requires you to build an index on "owner", which suffers for the same reason as option 1.
Options 3/4 are a kind of denormalization, which addresses the above performance issue. If the docs are large and updated often, the overhead of storage and the cost of compaction may be significant. You need benchmarking for your specific workloads.
In summary, if your target DB will be big and partitioned, then there is no easy answer. Careful prototyping and benchmarking would be needed.

Map or Array for RESTful design of finite, unordered collection?

A coworker and I are in a heated debate regarding the design of a REST service. For most of our API, GET calls to collections return something like this:
GET /resource
[
  { "id": 1, ... },
  { "id": 2, ... },
  { "id": 3, ... },
  ...
]
We now must implement a call to a collection of properties whose identifying attribute is "name" (not "id" as in the example above). Furthermore, there is a finite set of properties and the order in which they are sent will never matter. The spec I came up with looks like this:
GET /properties
[
  { "name": "{PROPERTY_NAME}", "value": "{PROPERTY_VALUE}", "description": "{PROPERTY_DESCRIPTION}" },
  { "name": "{PROPERTY_NAME}", "value": "{PROPERTY_VALUE}", "description": "{PROPERTY_DESCRIPTION}" },
  { "name": "{PROPERTY_NAME}", "value": "{PROPERTY_VALUE}", "description": "{PROPERTY_DESCRIPTION}" },
  ...
]
My coworker thinks it should be a map:
GET /properties
{
  "{PROPERTY_NAME}": { "value": "{PROPERTY_VALUE}", "description": "{PROPERTY_DESCRIPTION}" },
  "{PROPERTY_NAME}": { "value": "{PROPERTY_VALUE}", "description": "{PROPERTY_DESCRIPTION}" },
  "{PROPERTY_NAME}": { "value": "{PROPERTY_VALUE}", "description": "{PROPERTY_DESCRIPTION}" },
  ...
}
I cite consistency with the rest of the API as the reason to format the response collection my way, while he cites that this particular collection is finite and the order does not matter. My question is, which design best adheres to RESTful design and why?
IIRC how you return the properties of a resource does not matter in a RESTful approach.
http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm
From an API client's point of view I would prefer your solution, since it explicitly states that the name of the property is XYZ.
Your coworker's solution only implies that the key is the name, but how would a client know for sure without reading the API documentation? Try not to assume anything about your consuming clients: just because you know what it means (and it is probably easy enough to guess) does not make it obvious to them.
On top of that, it could break consuming clients if you ever decide to change that value from being a name back to an id, which in this case you have already done once in the past. All the clients would then need to change their code, whereas with your solution they would not, unless they need the newly added id (or some other property).
To me the approach would depend on how you need to use the data. Are the property names known beforehand by the consuming system, such that a map lookup could directly access the record you want without iterating over each item? Would there be a method such as...
GET /properties/{PROPERTY_NAME}
If you need to look up properties by name and that sort of method is NOT available, then I would agree with the map approach; otherwise, I would go with the array approach to provide consistent results when querying the resource for the full collection.
I think returning a map is fine as long as the result is not paginated or sorted server side.
If you need the result to be paginated and sorted on the server side, going for the list approach is a much safer bet, as not all clients might preserve the order of a map.
In fact, in JavaScript there is no built-in guarantee that maps will stay sorted (see also https://stackoverflow.com/a/5467142/817385).
The client would need to implement some logic to restore the sort order, which can become especially painful when server and client are using different collations for sorting.
Example
// server sent the response sorted with German collation
var map = {
  'ä': {'first': 'first'},
  'z': {'second': 'second'}
}
// but we sort the keys with the default Unicode collation algorithm
Object.keys(map).sort().forEach(function(key){ console.log(map[key]) })
// Object {second: "second"}
// Object {first: "first"}
A bit late to the party, but for whoever stumbles upon this with similar struggles...
I would definitely agree that consistency is very important and would generally say that an array is the most appropriate way to represent a list. APIs should also be designed to be generally useful, preferably without optimizing for a specific use case. Sure, that could make the use case you're facing today a bit easier to implement, but it will probably make you want to hit yourself when you're implementing a different one tomorrow. All that being said, for quite some applications the map-formed response would simply be easier (and possibly faster) to work with.
Consider:
GET /properties
[
  { "name": "{PROPERTY_NAME}", "value": "{PROPERTY_VALUE}", "description": "{PROPERTY_DESCRIPTION}" },
  ...
]
and
GET /properties/*
{
  "{PROPERTY_NAME}": { "value": "{PROPERTY_VALUE}", "description": "{PROPERTY_DESCRIPTION}" },
  ...
}
So / gives you a list whereas /* gives you a map. You might read the * in /* as a wildcard for the identifier, so you're actually requesting the entities rather than the collection. The keys in the response map are simply the expansions of that wildcard.
This way you can maintain consistency across your API while clients can still enjoy the map-format response when they prefer it. You could probably also implement both options with very little extra code on your server side.
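To illustrate how little extra code that could be, here is a minimal Node/Express sketch; the framework, route handling, and sample data are assumptions for illustration, not part of the original answer:
const express = require('express');
const app = express();

// Example data; in practice this would come from your data store.
const properties = [
  { name: 'timeout', value: '30', description: 'Request timeout in seconds' },
  { name: 'retries', value: '3', description: 'Number of retry attempts' }
];

// GET /properties -> the consistent array representation of the collection
app.get('/properties', (req, res) => {
  res.json(properties);
});

// GET /properties/:name -> a single entity, or the whole map when the '*' wildcard is requested
app.get('/properties/:name', (req, res) => {
  if (req.params.name === '*') {
    // Expand the wildcard: return all entities keyed by name
    const map = {};
    for (const p of properties) {
      map[p.name] = { value: p.value, description: p.description };
    }
    return res.json(map);
  }
  const prop = properties.find(p => p.name === req.params.name);
  if (!prop) return res.status(404).end();
  res.json(prop);
});

app.listen(3000);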

Google Apps Script: How to set "Use column A as labels" in chart embedded in spreadsheet?

I am using Google Apps Script and EmbeddedChartBuilder to embed line charts within my Google Spreadsheet. When you create these charts by hand, you have the (non-default) option to "Use column A as labels" (where "A" is the first column in the data range). I cannot find a way to do the same from a script. From the Google Visualization Line Chart documentation, it appears that the default is to treat the first column as having the "domain" role; but EmbeddedChartBuilder seems to override this and give all columns the "data" role. Since I don't have an explicit DataTable, I have no way to set the column roles myself.
Have I missed a way to do this? Or do I have to switch approaches from EmbeddedChartBuilder to using the spreadsheet as a data source?
Found it! Set the option useFirstColumnAsDomain to true with EmbeddedChartBuilder.setOption.
This option appears to be undocumented. I found it by going to "Publish chart" (click on the chart, then select from the drop-down in the top right) and inspecting the JavaScript data structure in the given code. To be exact, I created a chart with "Use column A as labels" unchecked, grabbed the published data structure, then checked "Use column A as labels", grabbed the new published data structure, and compared the two. To compare, I suggest normalizing the JSON and running diff. This technique can be used to reverse-engineer any setting in the chart editor.
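For completeness, a minimal Apps Script sketch using that option (the sheet, range, and chart position are arbitrary assumptions):
// Build a line chart that treats the first column of the range as labels (the domain).
var sheet = SpreadsheetApp.getActiveSheet();
var chart = sheet.newChart()
  .setChartType(Charts.ChartType.LINE)
  .addRange(sheet.getRange('A1:B10'))
  .setOption('useFirstColumnAsDomain', true)
  .setPosition(2, 4, 0, 0)
  .build();
sheet.insertChart(chart);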
I just experienced this same problem. I took a similar approach to Andrew's, but found a different solution (presumably because Google has added functionality to its charts in Spreadsheets since then).
I used http://jsbeautifier.org/ to format the code after publishing, and found the following part to be responsible for adding the data labels. Note that you can even change the color of the stem that connects the bar to the data label:
"series": {
"0": {
"errorBars": {
"errorType": "none"
},
"dataLabel": "value",
"annotations": {
"stemColor": "none"
}
},
"color": "black",
"targetAxisIndex": 0,
"annotations": {
"textStyle": {
"color": "red",
"fontSize": 12
}
}
},