I currently have two projects, stage and prod, and all my Docker images are managed using Container Registry.
I would like to be able to deploy images from the prod registry to App Engine in staging.
It looks like the best practice for this would be to create a service account that has access to Google Cloud Storage on prod.
I have done that, but I'm not sure how to integrate it into my CI pipeline when I'm already logged into gcloud with a staging account. Also, how do I get App Engine to pull from that repo?
All images are indeed stored in a bucket called artifacts.[PROJECT-ID].appspot.com. When using CI, make sure you either grant the service account the project-wide Storage Object Viewer role or grant it this role directly on the bucket (or on individual objects).
When using App Engine, there is also a service account called [PROJECT-ID]@appspot.gserviceaccount.com. Give this SA access to the bucket as well so it can pull images into App Engine.
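For example, a minimal sketch with gsutil. The account name ci-deployer@... is only an illustration; [REGISTRY-PROJECT-ID] stands for the project that owns the images and [APP-ENGINE-PROJECT-ID] for the project running App Engine:

gsutil iam ch serviceAccount:ci-deployer@[APP-ENGINE-PROJECT-ID].iam.gserviceaccount.com:roles/storage.objectViewer gs://artifacts.[REGISTRY-PROJECT-ID].appspot.com
gsutil iam ch serviceAccount:[APP-ENGINE-PROJECT-ID]@appspot.gserviceaccount.com:roles/storage.objectViewer gs://artifacts.[REGISTRY-PROJECT-ID].appspot.com

The first command covers the CI service account; the second covers the App Engine default service account mentioned above.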
I'm running jira in openshift using the basic image from atlassian: https://hub.docker.com/r/atlassian/jira-software
So far most things work fine.
I installed a plugin using the web ui which worked as well.
But now I'm running into an issue when a pod is restarted. The pod uses the image and naturally (as specified) my plugin is not installed anymore. I can install the plugin via web service calls and register it as an OSGi module for Jira. But I don't want to do this manually. Building a pipeline or job for this is quite easy (I'm thinking Jenkins or Ansible Tower). But so far I didn't find a way to trigger this pipeline after the pod is started (or better, after Jira is started).
Anyone got an idea how to handle this?
Thanks and best regards. Sebastian
Why not create a custom image based on the Atlassian image with everything you need installed?
As far as I know, there isn't a way to trigger a pipeline when a Pod is started; only Webhook, Image Change, and Config Change triggers are available. You'll need to write a Jenkinsfile to script all of the installation and setup you want, but then that can be triggered in one of the three ways mentioned.
I'm thinking an Image Change trigger would work best for you, so when the latest version of Atlassian's image comes out, you can run your pipeline to set everything up on the latest version.
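As a minimal sketch, assuming a pipeline BuildConfig named jira-setup and an ImageStream tag jira-software:latest tracking the Atlassian image (both names are assumptions):

oc set triggers bc/jira-setup --from-image=jira-software:latest

With that trigger in place, the pipeline runs whenever the tracked image tag is updated, so your plugin installation steps are replayed against the new image.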
Also, just curious, but do you have some persistent storage attached to the Jira pod? If not, you'll lose everything in Jira if the Pod dies; that means tickets, boards, comments, everything.
Update:
Looking at this page, it looks like most of the stuff you're trying to persist is stored in jira-home, so maybe mounting that as a persistent volume will be a good solution for you.
You're correct that the tickets are stored in the database, but I'm guessing the database connection settings are getting wiped when the Pod is cycled.
The jira-home directory stores your application and database connection settings, as well as a subdirectory for your plugins.
dbconfig.xml
This file (located at the root of your JIRA home directory) defines all details for JIRA's database connection. This file is typically created by running the JIRA setup wizard on new installations of JIRA or by configuring a database connection using the JIRA configuration tool.
You can also create your own dbconfig.xml file. This is useful if you need to specify additional parameters for your specific database configuration, which are not generated by the setup wizard or JIRA configuration tool. For more information, refer to the 'manual' connection instructions of the appropriate database configuration guide in Connecting JIRA to a database.
jira-config.properties
This file (also located at the root of your JIRA home directory) stores custom values for most of JIRA's advanced configuration settings. Properties defined in this file override the default values defined in the jpm.xml file (located in your JIRA application installation directory). See Advanced JIRA configuration for more information.
In new JIRA installations, this file may not initially exist and if so, will need to be created manually. See Making changes to the jira-config.properties file for more information. This file is typically present in JIRA installations upgraded from version 4.3 or earlier, whose advanced configuration options had been customized (from their default values).
plugins/
This is the directory where plugins built on Atlassian's Plugin Framework 2 (i.e. 'Plugins 2' plugins) are stored. If you are installing a new 'Plugins 2' plugin, you will need to deploy it into this directory under the installed-plugins sub-directory.
'Plugins 1' plugins should be stored in the JIRA application installation directory.
This directory is created on JIRA startup, if it does not exist already.
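To persist jira-home as suggested above, here is a minimal sketch with the oc CLI. It assumes the DeploymentConfig is named jira and that the image uses the default JIRA_HOME of /var/atlassian/application-data/jira (check your image; both values are assumptions):

oc set volume dc/jira --add --name=jira-home \
  --type=persistentVolumeClaim --claim-name=jira-home-pvc --claim-size=10Gi \
  --mount-path=/var/atlassian/application-data/jira

This creates a PersistentVolumeClaim and mounts it at JIRA_HOME, so the database connection settings and the installed-plugins directory survive Pod restarts.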
As my title states, we are using the AWS .NET SDK, and in our web.config we configured a profile that points to a credentials file on disk (see: https://docs.aws.amazon.com/sdk-for-net/v2/developer-guide/net-dg-config-creds.html, "using credentials file"), so the keys are out of the source code. This seems to work fine, but we rotate these keys every x period, so we need to change the keys within the file. My question is: does the AWS .NET SDK notice that the file has changed and automatically load the new credentials, and when does it actually load them? In other words, if we change the credentials in this file, do we need to do additional steps for the application to actually use them?
What I tried just now is to start up the application locally and change the credentials to faulty ones, and calls are still going through without a problem. Next, I stopped my application and rebuilt it with the same file containing the faulty credentials. After doing this the application is still able to make correct calls, so I'm wondering how this works, as if it is falling back on credentials that did work. Or maybe I just didn't test right.
We have a .NET Framework 4.6.2 application using AWS SDK version 3.3.
Also, what I forgot to mention is that for each request we initialize the client like this:
using (AmazonCognitoIdentityProviderClient client = new AmazonCognitoIdentityProviderClient(regionEndpoint))
The short answer is that creating a client like that will cause the credentials to be read from the credentials file when the first client is created.
The longer answer is that when you create a client without explicit credentials, it uses the FallbackCredentialsFactory class to find credentials, either through the credentials file or from the environment, such as EC2 instance metadata. The FallbackCredentialsFactory has a static instance of Amazon.Runtime.CredentialManagement.CredentialProfileStoreChain, which is what gets the credentials for a profile.
If you want something different, you could have your code create an instance of CredentialProfileStoreChain before creating a client, use it to get the credentials, and pass those credentials into the client.
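A minimal sketch of that approach, to go wherever you currently create the client; the profile name my-rotated-profile and the region are only illustrations:

// requires AWSSDK.Core and AWSSDK.CognitoIdentityProvider
using Amazon;
using Amazon.CognitoIdentityProvider;
using Amazon.Runtime;
using Amazon.Runtime.CredentialManagement;

var chain = new CredentialProfileStoreChain();
if (chain.TryGetAWSCredentials("my-rotated-profile", out AWSCredentials credentials))
{
    // Passing the credentials in explicitly means each new client uses whatever
    // is currently in the file, rather than the value the SDK resolved on first use.
    using (var client = new AmazonCognitoIdentityProviderClient(credentials, RegionEndpoint.EUWest1))
    {
        // make your calls here
    }
}

Re-reading the profile on each request adds a little file I/O, but it should pick up rotated keys without an application restart.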
I have built an e-commerce website on my local computer that uses Django version 2.2 and python 3.7.
The website consists of:
fancyfetish is the main project directory.
The apps (cart, users, baseapp, products, blog) are all stored in their own directory, 'apps'.
Within the settings folder I have three settings files:
- production.py
- base.py
- development.py
The static folder in the main directory is where I put collectstatic files.
Media is where I store externally uploaded images (product images for example)
Docs is just random bits like a hand drawn site layout.
Static files like JS and CSS are stored within baseapp, within apps.
I want to host this website on Amazon Web Services, and I assume I need to use Elastic Beanstalk. I went through the process of trying to host with the free version of EB, installed the EB CLI, and after using eb create and eb deploy on the CLI my website appeared.
However, the static files didn't load properly in the first instance because I had not properly configured DJANGO_SETTINGS_MODULE. I have now done this. But before deploying I added eb migrate functionality so that I could also migrate my database.
This seems to have messed everything up. I can no longer deploy because there is a DATABASE error, which I expected. The error said it was not able to connect to the MySQL database through 'localhost'. Well, of course it can't.
So, in order to deploy my site on AWS I needed to configure the databases, because with the eb migrate functionality it will no longer deploy without trying to also connect to my database using the settings I have configured.
I have so far, whilst in development mode, connected my project to MySQL and everything is running perfectly on localhost, with my models transferring beautifully to the database just as I would like.
I worked out that I need to create a database on AWS, obviously. So I set up an RDS. I didn't link it to my deployed application because it would appear that the application doesn't have an environment that I can see when I log into my console. So where my project has been deploying to I don't know, because it doesn't look like the CLI version is connected to the online version in my console.
So I thought I'd deal with that problem later and work out how to make a database, which I managed to do. However, migrating the database I already have up and running on MySQL to my RDS database seems impossible, and there are not very good instructions. Let alone trying to then connect said database to my deployed application, which doesn't seem to sync with my local app.
So, I have ended up deleting everything because I was becoming so confused, with so many new directories (.ebextensions etc.), a database that won't connect, a project that won't deploy, a database that won't point to my project, etc. I ended up creating an EC2 folder and all sorts, getting myself massively confused about what I actually need to do to make this whole thing work.
If any part of this ramble makes any sense to anyone out there, and you yourself have managed to deploy a larger django project to AWS and keep your existing databases then please do let me know. But I have a feeling this may be a long shot.
Basically I need a step by step list of what to do to deploy:
For example:
1) Create an elastic beanstalk instance
2) Create an environment on CLI that syncs to the one in my AWS console
etc
etc
(With how to's if you possibly have the time!)
Thank you, and I am so sorry for being so confused by something that may be simple
Edited to show my process:
I have built a directory called .ebextensions with a file within it called django.config with the following content:
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: fancyfetish/wsgi.py
I have run the following command:
eb init -p python-3.6 fancyfetish
There was no output as a result of this in the terminal; however, a directory called .elasticbeanstalk was created, containing one file called config.yml.
I then typed eb init to create an SSH key pair, and there was no output from this command at all. I have tried doing this several times.
Instead I created a key pair manually within the AWS console, and a file called keyname.pem was automatically downloaded to my computer.
I then typed into the console
chmod 400 path/to/key/keyname.pem
This provided no output on the terminal so I cannot know if it worked.
I moved the downloaded SSH file into the .ssh directory in the home directory of my computer, and then in the terminal typed:
eb init -k nameofkey
The output was:
WARNING: Uploaded SSH public key for "fancyfet" into EC2 for region us-west-2.
I then went on to type
eb create fancyfet-env
And an environment was created with the following output:
I know that this has to do with databases and connecting to MySQL.
I then typed:
eb deploy
With the following output:
So now comes the bit where I get stuck: successfully creating a database that connects to my already existing MySQL database, which is populated with data, and connecting the project to that database.
HELP! (Thank you so much!)
I have a static website currently hosted on AWS, and I suppose it's static (i.e. I can't update it without manually changing the HTML and then re-uploading it to AWS). I want to make it easier for myself to update certain sections (particularly the 'dates' section). So I was thinking of using a JSON object. Ideally the AWS website would be able to update from a JSON file on my local/personal computer, but I'm not sure if that's possible. Do I need the JSON file to be uploaded to a web server/AWS every time I change it? I would like to just update my JSON file locally and not have to change/update anything in AWS. Is this possible, or do I need some type of API?
From what I gather from your question, I can think of the two use cases below:
1) In case your static website is hosted on S3, you can use the AWS CLI (Command Line Interface): https://aws.amazon.com/cli/. This will allow you to upload the files directly from your local machine to the S3 bucket (see the sketch after this list).
2) In case it is hosted on an EC2 Instance, you can setup a git repository for your website and push the changes made to the git server. The same git server can then be used to pull the latest changes on your EC2 instance.
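For the S3 case, a minimal sketch, assuming the bucket is named my-website-bucket and the file you edit locally is dates.json (both names are only illustrations):

aws s3 cp ./dates.json s3://my-website-bucket/dates.json

You still run this each time you change the file locally, but it is a single command rather than a manual re-upload, and your pages can then fetch dates.json instead of hard-coding the dates in the HTML.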
I want to set up a separate Amazon EC2 instance where I store all the images uploaded by users via my website. I want to be able to serve images from this dedicated server. I know how to set up DNS names that would point to this server. But I would like to know how to set up the directories; for example, if I refer to an image URL such as http://images.mydomain.com/images/sample.jpg, then
images.mydomain.com is the server name and
images should be the folder name
Now the question is: should a web server be running on this server to serve the images, or can I just make the images folder public so that it is visible to the entire world? How do I avoid directory listing?
A pointer to any documentation would be greatly appreciated.
It certainly is possible to set up a separate EC2 instance to serve your images. You may have good reasons to do that--for example, you may want to authorize only specific users or groups of users to access certain images, in a way that's closely controlled by program logic.
OTOH, if you're just looking to segment the access of image/media files away from the server that provides HTML/web content, you will get much better performance / scalability by moving those files to a service that is specifically tuned for storage and web access. Amazon's S3 (Simple Storage Service) is one relatively straightforward option. Amazon's CloudFront content distribution network (CDN) or a competing CDN would be an even higher performance option.
Using a CDN for file access does add the complexity of configuring the CDN, but if you're going to the trouble of segmenting media access from your primary web server, and if you're expecting any significant I/O load, I've found it to be a high-return-for-effort-expended approach.
I would definitely not implement this as you are planning. You should store all your images in an Amazon S3 bucket and serve them via Amazon's CloudFront CDN. Why go through the hassle of setting up and maintaining an EC2 instance to do what Amazon has already done? S3 provides infinite storage, manages permissions, metadata, etc. CloudFront provides fast access to your images, caching them at edge locations all around the world. Additionally, you can use Amazon Route 53 (or some other DNS service) to point various CNAMEs to your CloudFront distribution.
If you're interested in this approach I'd be happy to provide more info on how to set this up.
Yes, you will definitely need to run a web server on the machine. Otherwise it will not be possible for clients to connect via HTTP/port 80 and view the images in a browser. This has nothing to do with directory listing being enabled. Once you have a web server running, you can disable directory listing in its configuration.
Install Apache on your server and run it (http://httpd.apache.org/docs/2.0/install.html). You then set up what's called a 'site' in its configuration, pointing to a local directory which will then be the base directory for your server. It could, for example, be /home/apache on a Unix system. There you create your images folder. If Apache is set up correctly you can then access your images via http://images.mydomain.com/images/sample.jpg.
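To switch off directory listing, a minimal sketch assuming a Debian/Ubuntu-style Apache install (on other layouts, add "Options -Indexes" to the <Directory> block of your site configuration instead):

sudo a2dismod autoindex
sudo service apache2 restart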