I am building my CloudFormation template to create an S3 bucket.
I wanted to create folders in the bucket at the same time, but I have read that I need to use a Lambda-backed custom resource.
So I've prepared the Lambda part of my template, but I need to add a condition:
if the Lambda refers to a bucket which already exists,
and that bucket has been created earlier in this same file (everything has to reside in one CloudFormation stack),
then call the Lambda to create my folders.
I do not want to check if my bucket exists in S3 or if my folders already exist as S3 objects in the bucket.
I would like the Lambda-backed resource to be created after the bucket has been created.
First of all, why do you need directories at all? S3 is in fact a key-value store; "paths" are just prefixes. Usually there is no benefit in creating them other than human-friendly presentation.
Secondly, you can either use DependsOn to enforce the proper order of resource provisioning, or (which I think is the better practice) make the Lambda generic and accept the bucket name as a custom resource parameter; passing it with the Ref function implicitly creates the dependency.
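For illustration, here is a rough sketch of what such a generic custom-resource Lambda could look like in Node.js/TypeScript with the AWS SDK v3, assuming a Node 18+ runtime where fetch is available; the property names BucketName and FolderKeys are my own invention, not anything prescribed by CloudFormation:

```typescript
// Sketch of a generic Lambda-backed custom resource that creates "folder"
// placeholder keys in whatever bucket is passed in via the resource properties.
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({});

export const handler = async (event: any) => {
  // BucketName / FolderKeys are hypothetical property names: in the template
  // you would pass BucketName: !Ref MyBucket, which also creates the
  // implicit dependency on the bucket.
  const { BucketName, FolderKeys = [] } = event.ResourceProperties ?? {};
  let status = "SUCCESS";
  try {
    if (event.RequestType !== "Delete") {
      for (const key of FolderKeys as string[]) {
        // A "folder" is just a zero-byte object whose key ends with "/".
        const Key = key.endsWith("/") ? key : `${key}/`;
        await s3.send(new PutObjectCommand({ Bucket: BucketName, Key }));
      }
    }
  } catch (err) {
    console.error(err);
    status = "FAILED";
  }
  // Every custom resource must report back to CloudFormation through the
  // pre-signed ResponseURL, otherwise the stack hangs until it times out.
  const body = JSON.stringify({
    Status: status,
    Reason: "See CloudWatch logs",
    PhysicalResourceId: `${BucketName}-folders`,
    StackId: event.StackId,
    RequestId: event.RequestId,
    LogicalResourceId: event.LogicalResourceId,
  });
  await fetch(event.ResponseURL, { method: "PUT", body });
};
```

Because the custom resource receives the bucket name through Ref, CloudFormation provisions the bucket first and only then invokes the Lambda, which is exactly the ordering you asked for.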
I am wondering what the best practice is for handling a new version of the same model in the Data Management API bucket system.
Currently, I have one bucket per user, and a file with the same name overwrites the existing model when doing an SVF/SVF2 conversion.
In order to handle model versioning in the best manner, should I:
1) create one bucket per converted file
or
2) continue with one bucket per user.
If 1): is there a limit on the number of buckets that can be created?
If 2): how do I get the translation to accept a bucketKey different from the file name? (As it is now, the uploaded file needs to keep its original file name for the translation to work.)
In advance, cheers for the assistance.
In order to translate a file, you do not have to keep the original file name, but you do need to keep the file extension (e.g. *.rvt), so that the Model Derivative service knows which translator to use. So you could just create files with different names: perhaps add a suffix like "_v1", etc., or generate random names and keep track of which file is which version of which model in a database. Up to you.
There is no limit on the number of buckets, but it might be overkill to have a separate one for each file.
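Just to illustrate the naming idea (the helper and the "_v" suffix are only an example, not anything required by the Model Derivative service):

```typescript
// Build a versioned object name while preserving the extension,
// e.g. versionedName("house.rvt", 2) -> "house_v2.rvt".
function versionedName(fileName: string, version: number): string {
  const dot = fileName.lastIndexOf(".");
  if (dot === -1) return `${fileName}_v${version}`; // no extension to preserve
  return `${fileName.slice(0, dot)}_v${version}${fileName.slice(dot)}`;
}
```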
I would like to know if it is possible to rename a Bucket.
If not, I would like to know if I can move all my models on the bucket I want to rename to a new bucket without translating each model again.
Thanks.
Unfortunately, it is not possible to rename a bucket, but it is possible to copy files (objects) across buckets with this API.
For the viewables, it is a different story - they are not stored in OSS buckets, but on the Model Derivative servers. This means you either need to translate the models again if you want to use the new URN, or leave them where they are and map the old and new URNs. Viewables are destroyed only when you delete their manifest.
In spring-cloud-config, it is possible to configure a properties file and then fetch an element in the properties file using the key.
But in AWS Parameter Store, each key-value pair is stored as a separate entry. As I understand it, I need to take each key-value pair from the properties file and configure it in Parameter Store.
In reality, each region (DEV, QA, etc.) has a set of configuration files. Each configuration file has a set of properties in it. These files can be grouped based on the hierarchy support that Parameter Store provides.
SDLC REGION >> FILE >> KEY-VALUE ENTRIES
The SDLC region can be supported by the hierarchy, and key-value entries are supported by Parameter Store. How do we manage the FILE entity in Parameter Store?
You can use the path hierarchy and get parameters by path or by name prefix.
e.g.
/SDLC_REGION/FILE/param1
/SDLC_REGION/FILE/param2
/SDLC_REGION2/FILE/param1
/SDLC_REGION2/FILE/param2
Then you can get by path: /SDLC_REGION/FILE to get all of its parameters.
Another option is using tags.
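As a rough sketch with the AWS SDK for JavaScript v3 (the path /SDLC_REGION/FILE below is just the placeholder hierarchy from the question), reading all parameters of one "file" could look like this:

```typescript
import { SSMClient, GetParametersByPathCommand } from "@aws-sdk/client-ssm";

const ssm = new SSMClient({});

// Fetch every parameter stored under one "file" prefix, paging through results.
async function loadFileParameters(path: string): Promise<Record<string, string>> {
  const result: Record<string, string> = {};
  let nextToken: string | undefined;
  do {
    const page = await ssm.send(new GetParametersByPathCommand({
      Path: path,           // e.g. "/SDLC_REGION/FILE"
      Recursive: true,      // include anything nested below the file prefix
      WithDecryption: true, // decrypt SecureString values
      NextToken: nextToken,
    }));
    for (const p of page.Parameters ?? []) {
      if (p.Name && p.Value !== undefined) result[p.Name] = p.Value;
    }
    nextToken = page.NextToken;
  } while (nextToken);
  return result;
}

// Usage: loadFileParameters("/SDLC_REGION/FILE").then(console.log);
```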
I am developing a node.js application. I am using an AWS EC2 instance with MySQL. I am using Amazon S3 for my storage. In my application, each user has a repository. Each repository has multiple folders and each folder has multiple files.
Is it a good idea to programmatically create an S3 folder for each user to achieve a directory structure?
Amazon takes away the pain of creating the parent and nested sub-folders when you put keys under multiple sub-folders; the prefixes appear automatically.
You can certainly consider using folders programmatically.
For instance
If you want to create a file under a subfolder, and then under a subsubfolder, you can simply put the key as subfolder/subsubfolder/file.txt.
The operation behaves like
create if not exists
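A minimal sketch with the AWS SDK for JavaScript v3 (the bucket name and the user/repository/folder key layout are made up for the example):

```typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({});

// Store a file under user/repository/folder without creating any folders first:
// the "directories" exist as soon as an object key contains those prefixes.
async function saveFile(userId: string, repo: string, folder: string, fileName: string, body: Buffer) {
  await s3.send(new PutObjectCommand({
    Bucket: "my-app-storage",                        // hypothetical bucket name
    Key: `${userId}/${repo}/${folder}/${fileName}`,  // e.g. "42/my-repo/docs/readme.txt"
    Body: body,
  }));
}
```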
I have too many modules (around 90) in my project.
But I want to keep an individual displaytag.properties file for each module rather than having a single file for the whole project.
How can I achieve this?
I am using Struts2.
I think that you can configure each displaytag using the appropriate bundle; remember the bundle search order from the S2 docs:
ActionClass.properties
Interface.properties (every interface and sub-interface)
BaseClass.properties (all the way to Object.properties)
ModelDriven's model (if implements ModelDriven), for the model object repeat from 1
package.properties (of the directory where class is located and every parent directory all the way to the root directory)
search up the i18n message key hierarchy itself
global resource properties
and from the docs for the DisplayTag library:
For the whole web application, create a custom properties file named "displaytag.properties" and place it in the application classpath. Displaytag will use the locale of the request object to determine the locale of the property file to use; if the key required does not exist in the specified file, the key will be loaded from a more general property file.
So I guess that displaytag will search the config keys in the available S2 bundles.