Does the Amazon CloudFront domain name ever change?

I've set up a CloudFront distribution pointing to my EC2 instance (web server), tucked behind an ELB.
I'm unsure whether this CloudFront domain name will ever change (i.e. when enabling/disabling the distribution, attaching/reattaching it, etc.).

So long as the distribution still exists, it will keep the same CloudFront domain name (the one assigned to you at the distribution's creation time). This CloudFront domain is only lost when the distribution is deleted:
A disabled distribution is no longer functional and you will no longer
be charged, but it can be enabled again at any time. A deleted
distribution is no longer accessible and is lost forever.
If you want to preserve the assigned CloudFront domain but temporarily disable access to it, use the AWS console to disable the distribution rather than deleting it.
The only reason I've had to delete distributions is for general cleanup of old ones, or when I've been close to their limit of 100 distributions per account (which has only happened once to me).
For even greater control, you also have the option of mapping your own domain name to a distribution using a CNAME record:
In CloudFront, an alternate domain name, also known as a CNAME, lets
you use your own domain name (for example, www.example.com) for links
to your objects instead of using the domain name that CloudFront
assigns to your distribution.
The above is the approach I use because I like more control over resources, but depending on your use case it might be overkill since the original distribution name won't change unless you delete it.
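For example (the domain names below are placeholders, with d111111abcdef8.cloudfront.net standing in for the domain CloudFront assigned to your distribution), after adding www.example.com as an alternate domain name on the distribution, you would create a DNS record along these lines:

    ; hypothetical CNAME mapping your own domain to the distribution's assigned domain
    www.example.com.    300    IN    CNAME    d111111abcdef8.cloudfront.net.
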

Related

Openshift HAproxy sticky session issue

I have a deployment with 2 pods of a web application. The web application requires a logon and maintains a session.
After I kill the first pod I am automatically redirected to the logon page of the second pod, but when the first pod comes back up I am redirected back to it.
I have tried the HAProxy "balance source" algorithm and cookies.
Any idea why it doesn't stay with the second pod?
balance source uses a hashing algorithm that changes the workload distribution every time the number of available backends changes, because that is what it's designed to do. If you had more than 2 backends, you would also find that taking down any one backend will cause some traffic that wasn't even hitting the impacted backend to shift to another, because of this redistribution.
If the hash result changes due to the number of running servers changing, many clients will be directed to a different server.
http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#4-balance
For an explanation of why you didn't see the expected behavior when using cookies instead of balance source, we'd need to see your configuration.
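For reference, cookie-based persistence in plain HAProxy config looks roughly like this sketch (the backend name, server names, and addresses are made up). With an insert-mode cookie, a client stays on its current pod until that pod disappears, and is then pinned to whichever pod it lands on next instead of being re-hashed:

    backend webapp
        balance roundrobin
        # insert a persistence cookie named SRV into responses; "indirect" keeps it
        # from being forwarded to the backend, "nocache" stops shared caches storing it
        cookie SRV insert indirect nocache
        server pod1 10.0.0.11:8080 check cookie pod1
        server pod2 10.0.0.12:8080 check cookie pod2
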

Constants over multiple servers

The question is, how to update a constant? This sounds like a stupid question, but let's look at the background of my issue:
Background
I manage a network of servers, which includes a MySQL server, multiple HTTP servers, and a Minecraft server (a self-hosted server that gamers who have installed Minecraft can connect to and play together). All of the user-end services (HTTP servers, Minecraft server, user apps) are directly or indirectly related to the MySQL server. The MySQL database stores different data for each player account, for example, the online/offline status of players, etc.
In programming, constants are used to create a general reference to a value that will not change during a runtime, especially for software-internal identifiers such as data flags, bitmasks, etc. In my case, I also use constants to store specific data, such as the MySQL server's address and other credentials. So when I want to change the server address, I only need to modify it in one place, for example an internal constants.php on the server.
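For illustration, such a constants.php might look something like this (the values are, of course, made up):

    <?php
    // constants.php - the single place each server reads shared settings from (hypothetical values)
    define('MYSQL_HOST', 'db.internal.example.com');
    define('MYSQL_USER', 'gameserver');
    define('MYSQL_PASSWORD', 'changeme');
    define('FLAG_VIP', 0x01);       // bitmask flag: player is a VIP
    define('FLAG_MODERATOR', 0x10); // bitmask flag: player is a moderator
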
Problem
When I migrate my MySQL database to another host or change its password, I have to update the details on every server. It is not possible to create a centralized data provider that provides the server address, because the MySQL server itself is the centralized data provider. That means every time I change the value, I must update all servers, and I must maintain a very private, local list (which probably has to be written on a memo stuck to my computer!) of all these places, because it is really hard to locate all the references.
So, my question is: is there a better way of management that allows me to change the values from one place? Note that the servers are on different hosts, so it is not possible to put the values in a local file, and it doesn't sound reasonable to create a centralized data provider (call it a password provider) to provide access to the real centralized data provider (MySQL) either, since whenever I need to change the MySQL database details, I have the same need to change the password provider's details as well.
This is less of a concern, but since it is a similar question, I am putting it down here too. I use integer bitmasks to store player ranks. For example, if the player is a VIP, he has the 0x01 flag; if the player is a moderator, he has the 0x10 flag; 0x11 if both VIP and moderator; and so on. I want to refactor the bitmask values as well, but it would be great trouble, because I would need to shut down all servers, update the MySQL values, update the constants on every server, then restart all servers, to avoid a potential security vulnerability during the update window. Is there a more convenient way to do that?
This is a network management question too, but I consider it more programming-related.
We are talking about a deployment system. For example, we can use Capistrano: https://github.com/capistrano/capistrano. We need to keep constants.php in git and create a Capistrano task that deploys this file to each server. I use this tool for deploying projects that are among the 50 busiest sites of the Russian segment of the Internet :)
We are talking about data migration. There are several ways to do it, some with downtime and some without (sometimes it depends on the situation).
Data migration without downtime (see the sketch after the steps below):
modify your app so it understands both the old and the new variant of the player bitmask
deploy the modified app
update the bitmasks in your databases
modify your app so it understands only the new variant of the bitmask
deploy the modified app
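As a rough sketch of the first step (the new flag values and the marker bit below are hypothetical; 0x01 and 0x10 are the old flags from the question), the app can accept both encodings and normalize to the new one while the database rows are being migrated:

    <?php
    // Hypothetical old and new rank flags
    const OLD_VIP = 0x01;  const OLD_MODERATOR = 0x10;
    const NEW_VIP = 0x01;  const NEW_MODERATOR = 0x02;
    const NEW_FORMAT_MARKER = 0x8000; // high bit set only on rows already migrated

    // Accept a rank value in either encoding and return it in the new encoding.
    function normalizeRank(int $rank): int {
        if ($rank & NEW_FORMAT_MARKER) {
            return $rank & ~NEW_FORMAT_MARKER; // already in the new format
        }
        $new = 0;
        if ($rank & OLD_VIP)       { $new |= NEW_VIP; }
        if ($rank & OLD_MODERATOR) { $new |= NEW_MODERATOR; }
        return $new;
    }
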

How do you add additional NICs to a Compute Engine VM?

How do I add a NIC to a Compute Engine instance? I need more than one NIC so I can build out an environment... I've looked all over and there is nothing on how to do it...
I know it's probably some API call through the SDK, but I have no idea, and I can't find anything on it.
EDIT:
It's the rhel6 image. Figured I should clarify.
The question is probably old and a lot has changed since. Now it's definitely possible to add more NICs to an instance, but only at creation time (you can find a networking tab on the create-instance page on the portal; a corresponding REST API exists too). Each NIC has to connect to a different virtual network, so you need to create the additional networks before creating the instance (if you don't have them already).
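If you script instance creation, the equivalent (to the best of my knowledge; the instance, network, and subnet names below are placeholders) is to repeat the network-interface flag once per NIC when the instance is created:

    # hypothetical example: a VM with two NICs, each attached to its own VPC network/subnet
    gcloud compute instances create multi-nic-vm \
        --zone us-central1-a \
        --network-interface network=net-a,subnet=subnet-a \
        --network-interface network=net-b,subnet=subnet-b
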
Do you need an external address or an internal address? If external, you can use gcutil to add an IP address to an existing instance. If internal, you can configure a static network address on the instance, and add a route entry to send traffic for that address to that instance.
I was looking for a similar thing (to have a VM which runs Apache and nginx simultaneously on different IPs), but it seems that although you can have multiple networks (up to 5) in a project and each network can have multiple instances attached to it, you cannot have more than one network per instance. From the documentation:
A project can contain multiple networks and each network can have multiple instances attached to it. [...] A network belongs to only one project and each instance can only belong to one network.

Should I move client configuration data to the server?

I have a client software program used to launch alarms through a central server. At first it stored configuration data in registry entries, now in a configuration XML file. This configuration information consists of Alarm number, alarm group, hotkey combinations, and such.
This client connects to a server using a TCP socket, which it uses to communicate this configuration to the server. In the next generation of this program, I'm considering moving all configuration information to the server, which stores all of its information in a SQL database.
I envision using some form of web interface to communicate with the server and set up the clients, rather than the current method, which is to either configure the client software on the machine through a control panel, or on install to either push out an XML file or pass command-line parameters to the MSI. I'm thinking now the only information I would want to specify at install time would be the path to the server. Each workstation would be identified by computer name and configured through the server.
Are there any problems or potential drawbacks of this approach? The main goal is to centralize configuration and make it easier to make changes later, because our software is usually managed by one or two people at most.
Other than allowing the client to function offline (if such a possibility makes sense for your application), there doesn't appear to be any drawback to moving the configuration to a centralized location. Indeed, even with a centralized location, a feature can be added to the client to cache the last known configuration, for use when the client is offline.
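A minimal sketch of that caching idea, in PHP purely for illustration (the URL and cache path are made up): fall back to the last configuration that was successfully fetched.

    <?php
    // Try the central server first; fall back to the last cached copy when offline.
    function loadConfig(): array {
        $cacheFile = __DIR__ . '/config.cache.json';
        $json = @file_get_contents('https://config.example.com/client/config');
        if ($json !== false) {
            file_put_contents($cacheFile, $json);   // refresh the local cache
        } elseif (is_readable($cacheFile)) {
            $json = file_get_contents($cacheFile);  // offline: use last known configuration
        } else {
            return [];                              // no server and no cache yet
        }
        return json_decode($json, true) ?: [];
    }
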
In case you implement a centralized database design, I suggest considering storing configuration parameters in an Entity-Attribute-Value (EAV) structure, as this schema is particularly well suited for parameters. In particular, it allows easy addition and removal of individual parameters and also handles parameters as a list (paving the way for a list-oriented display in the UI, so no UI changes are needed when new types of parameters are introduced).
Another reason why configuration parameter collections and EAV schemas work well together is that, even with very many users and configuration points, the configuration data remains small enough that it doesn't suffer some of the limitations of EAV with "big" tables.
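A sketch of what that could look like (the table and column names are made up): one three-column table, which the server loads into a plain key/value list per client.

    <?php
    // Hypothetical EAV-style table:
    //   CREATE TABLE client_config (
    //       client_name VARCHAR(64),   -- entity: which workstation
    //       param_name  VARCHAR(64),   -- attribute: e.g. 'alarm_number', 'hotkey'
    //       param_value VARCHAR(255),  -- value, stored as text
    //       PRIMARY KEY (client_name, param_name)
    //   );
    // Loading one client's parameters as an associative array:
    function loadClientConfig(PDO $db, string $clientName): array {
        $stmt = $db->prepare(
            'SELECT param_name, param_value FROM client_config WHERE client_name = ?');
        $stmt->execute([$clientName]);
        return $stmt->fetchAll(PDO::FETCH_KEY_PAIR); // ['alarm_number' => '42', ...]
    }
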
The only thing that comes to mind is security of the information, though you probably have that issue in either case. A database would probably be easier to interface with, as everything would be in one spot.

Simple, secure API authentication system

I have a simple REST JSON API for other websites/apps to access some of my website's database (through a PHP gateway). Basically the service works like this: call example.com/fruit/orange and the server returns JSON information about the orange. Here is the problem: I only want websites I permit to access this service. With a simple API key system, any website could quickly obtain a key by copying it from an authorized website's (potentially) client-side code. I have looked at OAuth, but it seems a little complicated for what I am doing. Solutions?
You should use OAuth.
There are actually two OAuth specifications, the 3-legged version and the 2-legged version. The 3-legged version is the one that gets most of the attention, and it's not the one you want to use.
The good news is that the 2-legged version does exactly what you want: it allows an application to grant access to another via either a shared secret key (very similar to Amazon's Web Service model; you would use the HMAC-SHA1 signing method) or via a public/private key system (signing method: RSA-SHA1). The bad news is that it's not nearly as well supported as the 3-legged version yet, so you may have to do a bit more work than you otherwise might right now.
Basically, 2-legged OAuth just specifies a way to "sign" (compute a hash over) several fields which include the current date, a random number called a "nonce", and the parameters of your request. This makes it very hard to impersonate requests to your web service.
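A minimal sketch of that signing idea in PHP (the parameter names and canonicalization below are simplified inventions; real 2-legged OAuth defines its own signature base string):

    <?php
    // Sign a request with a shared secret; the server recomputes the same HMAC and compares.
    function signRequest(array $params, string $consumerKey, string $secret): array {
        $params['consumer_key'] = $consumerKey;
        $params['timestamp']    = time();
        $params['nonce']        = bin2hex(random_bytes(16)); // random value to prevent replays
        ksort($params);                                      // canonical parameter order
        $baseString          = http_build_query($params);
        $params['signature'] = hash_hmac('sha1', $baseString, $secret);
        return $params;
    }
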
OAuth is slowly but surely becoming an accepted standard for this kind of thing -- you'll be best off in the long run if you embrace it because people can then leverage the various libraries available for doing that.
It's more elaborate than you would initially want to get into, but the good news is that a lot of people have spent a lot of time on it, so you know you haven't forgotten anything. A great example is that very recently Twitter found a gap in the OAuth security which the community is currently working on closing. If you'd invented your own system, you'd have to figure out all this stuff on your own.
Good luck!
Chris
OAuth is not the solution here.
OAuth is for when you have end users and want 3rd-party apps not to handle end-user passwords. When to use OAuth:
http://blog.apigee.com/detail/when_to_use_oauth/
Go for simple api-key.
And take additional measures if there is a need for a more secure solution.
Here is some more info, http://blog.apigee.com/detail/do_you_need_api_keys_api_identity_vs._authorization/
If someone's client side code is compromised, they should get a new key. There's not much you can do if their code is exposed.
You can, however, be more strict by requiring the IP addresses of authorized servers to be registered in your system for a given key. This adds an extra step and may be overkill.
I'm not sure what you mean by a "simple API key", but you should be using some kind of authentication that involves private keys (known only to the client and server), and then perform some kind of checksum algorithm on the data to ensure that the client is indeed who you think it is and that the data has not been modified in transit. Amazon AWS is a great example of how to do this.
I think it may be a little strict to guarantee that code has not been compromised on your clients' side. I think it is reasonable to place responsibility on your clients for the security of their own data. Of course this assumes that an attacker can only mess up that client's account.
Perhaps you could keep a log of which IPs requests are coming from for a particular account, and if a new IP comes along, flag the account, send an email to the client, and ask them to authorize that IP. I don't know, maybe something like that could work.
Basically you have two options: either restrict access by IP or use an API key. Both options have their positive and negative sides.
Restriction by IP
This can be a handy way to restrict access to your service. You can define exactly which 3rd-party services will be allowed to access your service without forcing them to implement any special authentication features. The problem with this method, however, is that if the 3rd-party service is written, for example, entirely in JavaScript, then the IP of the incoming request won't be the 3rd-party service's server IP but the user's IP, as the request is made by the user's browser and not the server. Using IP restriction therefore makes it impossible to write client-driven applications and forces all requests to go through a server with proper access rights. Remember that IP addresses can also be spoofed.
API key
The advantage of API keys is that you do not have to maintain a list of known IPs; you do have to maintain a list of API keys, but it's easier to automate their maintenance. Basically, how this works is that you have two keys, for example a user id and a secret password. Each method request to your service should provide an authentication hash consisting of the request parameters, the user id, and a hash of these values (where the secret password is used as the hash salt). This way you can both authenticate and restrict access. The problem with this is, once again, that if the 3rd-party service is written as client-driven (for example in JavaScript or ActionScript), then anyone can parse the user id and secret salt values out of the code.
Basically, if you want to be sure that only the few services you've specifically defined will be allowed to access your service, then your only option is to use IP restriction and hence force them to route all requests via their servers. If you use an API key, you have no way to enforce this.
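And a sketch of the matching server-side check for the API-key scheme described above (the secret-lookup callable and parameter names are hypothetical):

    <?php
    // Recompute the hash from the received parameters and the caller's stored secret,
    // then compare it to the hash the caller sent.
    function isValidRequest(array $params, callable $secretForUserId): bool {
        if (!isset($params['user_id'], $params['hash'])) {
            return false;
        }
        $claimedHash = $params['hash'];
        unset($params['hash']);                 // the hash itself is not part of the signed data
        ksort($params);                         // same canonical order the client used
        $secret = $secretForUserId($params['user_id']);
        if ($secret === null) {
            return false;                       // unknown user id
        }
        $expected = hash_hmac('sha1', http_build_query($params), $secret);
        return hash_equals($expected, $claimedHash); // constant-time comparison
    }
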