I am trying to understand some basic concepts, as I am still new to service discovery and cloud programming (excuse the cliché). A question that has been on my mind for some time is: are service discovery solutions like Consul, etcd, and ZooKeeper responsible for providing service credentials as well?
For example, if we have a web application that queries the location of its database server(s), who is responsible for providing it with the credentials (username, password) for connecting to them? I know this is probably subjective, but I would be glad to learn more about best practices in this area.
Indeed, see Consul and Vault. Now for the reasoning: service registries typically don't come with a full-fledged set of ACLs and the like to protect secrets; moreover, they gossip said secrets around the network and dump them left and right on disk. It's a security nightmare. You want to make sure that access is as limited as possible, strictly on a need-to-know basis. Therefore, use a tool built specifically for that: Hardware Security Modules, Vault, Chef encrypted data bags, and so on.
Consul and Vault implement exactly the split you propose: Consul for service discovery, Vault for sharing secrets (including features like leases and dynamic secrets).
Please read about these two tools to see if this concept works for you.
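To get a feel for the split, here is a minimal sketch in Java against the two tools' HTTP APIs: Consul's catalog endpoint for discovery and Vault's KV v2 endpoint for the secret, both on their default local ports. The service name mysql, the secret path, and the use of a token from the environment are assumptions for illustration; real code would parse the JSON rather than print it.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DiscoveryAndSecrets {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();

        // Service discovery: ask the local Consul agent where "mysql" lives.
        HttpRequest whereIsDb = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8500/v1/catalog/service/mysql"))
                .GET().build();
        String nodes = http.send(whereIsDb, HttpResponse.BodyHandlers.ofString()).body();
        System.out.println("Consul catalog entry (address/port): " + nodes);

        // Secrets: ask Vault's KV v2 engine for the credentials, authenticated
        // by a token (export VAULT_TOKEN before running this sketch).
        HttpRequest credentials = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8200/v1/secret/data/mysql"))
                .header("X-Vault-Token", System.getenv("VAULT_TOKEN"))
                .GET().build();
        String secret = http.send(credentials, HttpResponse.BodyHandlers.ofString()).body();
        System.out.println("Vault secret payload: " + secret);
    }
}
```

The point of the split shows up right in the code: the discovery call is unauthenticated and cheap, while the secret never leaves Vault without a token.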
I have an instance of Azure API Management. I couldn't find an answer in the Azure docs. In my API Management settings, both logging options (Azure Monitor and Application Insights) are enabled, with every setting inside them enabled. With this configuration, both options on, am I paying twice for logs?
My other question is: which logs that I don't really need can I disable on APIM to save costs?
Regards
To save costs, you could use only App Insights for logging. However, keep in mind that neither fully replaces the other, although there is some overlap.
App Insights is part of Azure Monitor; however, it acts as an automatic problem monitor, as opposed to giving you deep insight into your Azure infrastructure.
If your day-to-day activities do not require deeper insight into the APIs, such as telemetry analysis, then perhaps Azure Monitor is not needed.
There is also the option of configuring this per API and/or per APIM instance, so if you run into performance issues on one of them, you might want to enable Azure Monitor there, as opposed to only using App Insights.
Without deeper insight into the specifics of your business and infrastructure, my advice is to continue with both for the production instance and, if you'd like, cut back on Azure Monitor for dev and staging.
I am making a JavaFX program and need to use a small MySQL database. Currently I am hosting it on my computer, but I can't access it from other computers on other networks. I need the MySQL server to be accessible from anywhere. How do I host one that does that? Thanks in advance; all help is welcome.
Well you have a few options depending on how important this MySQL database is to you, how you intend to connect to it from outside, and what you want to do with it.
The naive implementation involves opening your firewall and forwarding incoming traffic on whatever port you have configured MySQL for (3306 by default) to the IP address of your server. If you do this, you absolutely must secure your database with a password! You'll also need to keep the server's public IP address handy so you know how to find it when you're out.
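Once the port is reachable, connecting from your JavaFX application is plain JDBC. A minimal sketch, assuming MySQL Connector/J is on the classpath; the IP address, database, and user below are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RemoteDbCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder public IP and credentials; 3306 is the MySQL default port.
        String url = "jdbc:mysql://203.0.113.10:3306/mydb?useSSL=true";
        try (Connection conn = DriverManager.getConnection(
                     url, "appuser", System.getenv("DB_PASSWORD"));
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT 1")) {
            rs.next();
            System.out.println("Connected, server answered: " + rs.getInt(1));
        }
    }
}
```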
Use Amazon AWS, Google Compute Engine, Google App Engine, or some other cloud platform to host a MySQL instance. All the big players also tend to host pretty solid managed RDBMS offerings. The advantage here is that you're not exposing your home computer to malice, and you're plugging into an ecosystem that will answer a lot of other questions for you as they come up along the way (e.g., how do you ensure redundancy? Backups? Scale for traffic?). There are a ton of other advantages too. It's the cloud... dude...
Use a SaaS DB service such as Firebase (Note: We are leaving MySQL and SQL database territory with Firebase)
If you plan to let other parties access your MySQL instance to make use of your data, you might also want to consider implementing a REST API (or a SOAP API, if you hate the future) as an abstraction layer that interacts with your database and serves its data in a consistent and reliable format.
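To illustrate the shape of such a layer, here is a sketch using the JDK's built-in HttpServer. The endpoint path and payload are made up, and a real facade would query MySQL instead of returning a canned body:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class TinyRestFacade {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        // A real handler would run a query and serialize the rows;
        // this one returns fixed JSON to show the contract, not the plumbing.
        server.createContext("/api/users", exchange -> {
            byte[] body = "[{\"id\":1,\"name\":\"alice\"}]".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (var out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        System.out.println("REST facade listening on http://localhost:8080/api/users");
    }
}
```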
That's the best answer I can give with the details afforded. Do look around, though; the options in this arena are nearly limitless, depending on how and what you're trying to do.
You should be able to access your machine from your LAN pretty easily, unless firewall rules prevent connections to it. Alternatively, many cloud hosting providers have a free tier you can sign up for to bring up a test instance of MySQL; OpenShift is one example.
Let's say you have a complex system that consists of several programs and processes running on several servers.
Now you want to change some important configuration variable like the password to a database that is used by several of these processes.
Is there a standardized solution that lets you store these configurations in a central place and pushes changes out to the different servers on update?
I imagine something like environment variables, but ones that can be read and changed quickly at runtime.
You can use MQs (short for message queues) for communication between distributed applications.
Rackspace has a short article about different MQs; check below:
Using Message Queues
Most of these support SSL and HTTPS, so you can have peace of mind that the channels are secure.
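The broker wiring itself depends on which MQ you pick (the article above compares several), so here is only a sketch of the consuming side in Java: configuration lives in an atomically swappable map, and whatever message callback your MQ client provides calls update() when a change is pushed. The class and its method names are assumptions, not any specific MQ API:

```java
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

public class LiveConfig {
    private static final AtomicReference<Map<String, String>> CURRENT =
            new AtomicReference<>(Map.of());

    // Called from the MQ consumer thread when a "config changed" message arrives.
    public static void update(Map<String, String> fresh) {
        // Atomic swap: readers never observe a half-written map.
        CURRENT.set(Map.copyOf(fresh));
    }

    // Called from application code - like reading an environment variable, but live.
    public static String get(String key) {
        return CURRENT.get().get(key);
    }
}
```

This gives you exactly the "environment variables, but changeable at runtime" behavior you describe, with the MQ acting as the push channel.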
I am working on a spec for a startup building a financial broker-check website. It involves storing information about financial advisers and users' payment details (so it obviously needs a lot of security). What kind of database is best suited for the application? Is MySQL or one of its open-source variants enough, or is it better to go with Oracle Enterprise, etc.? Also, any info about the usefulness of application servers over traditional web servers (cloud-based or not) in this scenario, and the preferred scripting language (PHP, Ruby, Python) for secure web applications, would be appreciated.
Your choice of language, database, etc. has a relatively small impact on the security of your application. The developer's understanding of how to write secure code and the developer's understanding of the features provided by their tools is far more important. It is entirely possible to write a secure application on an open source LAMP stack. It is entirely possible to write a secure application on a completely closed source stack. It is also very easy to write insecure applications on any stack.
An enterprise database like Oracle will (depending on the edition, the options that are licensed, and the add-ons that are purchased) provide a host of security functions that may be useful. You can transparently encrypt the data at rest, you can encrypt the data when it flows over the network to the app server, you can prevent the DBA from viewing sensitive data, you can audit the actions of the DBA and other users, etc. But these sorts of things really only come into play when you've written a reasonably secure application to begin with. It does you little good to encrypt all the data if your application is vulnerable to SQL injection attacks and can be easily hacked to present all the decrypted data to the attacker, for example.
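To make that last point concrete: a parameterized query binds user input as data rather than splicing it into the SQL text, which closes the injection hole on any stack. The table and column names here are hypothetical:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class AdviserLookup {
    public static String findLicense(Connection conn, String userSuppliedName)
            throws SQLException {
        // The ? placeholder is bound as a value; the input can never
        // rewrite the statement itself.
        String sql = "SELECT license_no FROM advisers WHERE name = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, userSuppliedName);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("license_no") : null;
            }
        }
    }
}
```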
Let's say that you have a standalone application (a Java application in my case) and this application has a configuration file (an XML file in my case) where you store the credentials (user and password) for a bunch of databases you need to connect to.
Everything works great, but now you discover (or are given a new requirement, as I was) that you have to put this application on a different server, and that you can't keep these credentials in the configuration files because of security and/or compliance considerations.
I'm considering using data sources hosted in the application server (a WAS server), but I think this could perform poorly, and maybe it's not the best approach since I'm connecting from a standalone application.
I was also considering using some sort of encryption, but I would like to keep things as simple as possible.
How would you handle this case? Where would you put these credentials, and how would you protect them from being compromised? Or how would you connect to your databases in this scenario?
I was also considering using some sort of encryption, but I would like to keep things as simple as possible.
Take a look at the Java Cryptography Architecture - Password-Based Encryption. The concept is fairly straightforward: you encrypt/decrypt the XML stream with a key derived from a user password prior to (de)serializing the file.
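A minimal sketch of that idea, assuming PBKDF2 for the key derivation and AES-GCM for the cipher (both are standard JCA algorithms; the iteration count is just a reasonable-looking choice). Decryption is identical with DECRYPT_MODE and the stored salt and IV:

```java
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public class ConfigCrypto {
    // Derives an AES key from the user's password and encrypts the XML bytes.
    // The salt and IV are not secret: store them alongside the ciphertext.
    public static byte[] encrypt(char[] password, byte[] salt, byte[] iv, byte[] xml)
            throws Exception {
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        byte[] keyBytes = factory
                .generateSecret(new PBEKeySpec(password, salt, 210_000, 256))
                .getEncoded();
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE,
                new SecretKeySpec(keyBytes, "AES"),
                new GCMParameterSpec(128, iv));
        return cipher.doFinal(xml);
    }

    public static byte[] randomBytes(int n) {
        byte[] b = new byte[n];
        new SecureRandom().nextBytes(b);
        return b;
    }
}
```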
I'm only guessing at what your security/compliance considerations require, but definitely some things to consider:
Require strong passwords.
Try to minimize the amount of time that you leave the sensitive material decrypted.
At runtime, handle sensitive material carefully: don't leave it exposed in a global object; instead, reduce the scope of sensitive material as much as possible, for example by encapsulating all decrypted data as private fields in a single class (see the sketch after this list).
Think about how you should handle the case where the password to the configuration file is lost. Perhaps it's simple, in that you can just create a new config file?
Require both a strong password and a user keyfile to access the configuration file. That would leave it up to the user to store the keyfile safely; and if either piece of information is accidentally exposed, it's still useless without both.
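Here is the kind of single class the third point hints at: a small, hypothetical holder that keeps the decrypted password as a char[] that can be wiped, rather than a String that lingers on the heap until garbage collection:

```java
import java.util.Arrays;

public final class DbCredential implements AutoCloseable {
    private final String user;
    private final char[] password;

    public DbCredential(String user, char[] password) {
        this.user = user;
        this.password = password.clone();
    }

    public String user() {
        return user;
    }

    public char[] password() {
        return password; // callers must use it immediately and not retain it
    }

    @Override
    public void close() {
        // Wipe the secret as soon as the connection has been established.
        Arrays.fill(password, '\0');
    }
}
```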
While this is probably overkill, I highly recommend taking a look at Applied Cryptography by Bruce Schneier. It provides a great look into the realm of crypto.
If your standalone application runs in a large business or enterprise, it's likely that they're using the Lightweight Directory Access Protocol (LDAP) for their passwords.
You might want to consider using LDAP, or providing hooks in your application for a corporate LDAP directory.
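Such a hook can be as small as a simple LDAP bind via JNDI, which ships with the JDK. The server URL and DN layout below are assumptions; your directory administrators will have the real values:

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.InitialDirContext;

public class LdapCheck {
    // Verifies a user's corporate credentials by attempting a simple bind.
    public static boolean authenticate(String userDn, String password) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        // e.g. "uid=jdoe,ou=people,dc=example,dc=com" - layout varies per directory.
        env.put(Context.SECURITY_PRINCIPAL, userDn);
        env.put(Context.SECURITY_CREDENTIALS, password);
        try {
            new InitialDirContext(env).close(); // bind succeeded
            return true;
        } catch (Exception e) {
            return false; // bind rejected or server unreachable
        }
    }
}
```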
I'm considering using data sources hosted in the application server (a WAS server), but I think this could perform poorly, and maybe it's not the best approach since I'm connecting from a standalone application.
On the contrary: those data sources are usually connection-pooled, which should actually improve DB performance, since establishing a connection is, on balance, the most expensive operation.
Have you tested/benchmarked it?
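For reference, consuming such a container-managed datasource is a one-line JNDI lookup; the name jdbc/MyAppDb is a placeholder for whatever is configured in WAS, and a truly standalone client would additionally need the app server's client runtime to resolve it:

```java
import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class PooledLookup {
    public static Connection borrow() throws Exception {
        // The container hands back a pooled connection, which is cheap to obtain.
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/MyAppDb");
        return ds.getConnection();
    }
}
```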