How do I allow any host to submit jobs in Oracle/Sun Grid Engine?

We have an internal network devoted to development and testing, and this network has an OGE cluster on it. I'd like to allow any machine on that network to submit jobs, without having to add them manually one by one as submit hosts. I've tried doing a wildcard, but it hasn't liked my syntax. Is there any way to do this?
Thanks!

A qualified "no" - if you really need this, consider automating it instead.
GridEngine does not support wildcarding in host names. GE relies heavily on forward and reverse name resolution for pretty much all host interactions. You are not going to get a GridEngine cluster to blindly accept job submissions from any unspecified machine on your subnet without some bad magic.
If you use a configuration management system like Puppet or Chef, that might be the best layer to define whether a server is a submit host or not.
The alternative, brute-force way (this will almost certainly violate your IT department's acceptable use policy) is to use something like nmap to produce a list of hostnames on your network (if you think you can get away with it) and write a simple script to add each one as a submit host. This approach would require minor ongoing maintenance as the hosts on your network change over time.
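The per-host operation is just qconf -as <hostname>, run as a Grid Engine admin user. Here is a minimal sketch of that loop, written in Java rather than shell purely for illustration; hosts.txt is a hypothetical file with one hostname per line (e.g. extracted from an nmap scan), and it assumes qconf is on the PATH:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class AddSubmitHosts {
    public static void main(String[] args) throws IOException, InterruptedException {
        // hosts.txt is a hypothetical file with one hostname per line,
        // e.g. generated from an nmap scan of the dev subnet.
        List<String> hosts = Files.readAllLines(Paths.get("hosts.txt"));
        for (String host : hosts) {
            host = host.trim();
            if (host.isEmpty()) continue;
            // "qconf -as <host>" registers <host> as a submit host; this
            // must be run by a Grid Engine admin user with qconf on the PATH.
            Process p = new ProcessBuilder("qconf", "-as", host)
                    .inheritIO()
                    .start();
            if (p.waitFor() != 0) {
                System.err.println("Failed to add submit host: " + host);
            }
        }
    }
}
```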

Related

Host a MySQL Server

I am making a JavaFX program and need to use a small MySQL database. Currently I am hosting one on my computer, but I can't access it from other computers on other networks. I need the MySQL server to be accessible from anywhere. How do I host one that does that? Thanks in advance, all help is welcome.
Well you have a few options depending on how important this MySQL database is to you, how you intend to connect to it from outside, and what you want to do with it.
The naive implementation would involve opening your firewall and directing all incoming traffic on whatever port you have configured MySQL for to the IP address of your server. If you do this you absolutely must secure your database with a password!!! You'll also need to keep the server's public IP address handy so you know how to find it when you're away.
Use Amazon AWS, Google Compute Engine, Google App Engine, or some other cloud platform to host a MySQL instance. All the big players also tend to host pretty awesome RDBMS solutions. The advantage here is that you're not exposing your home computer to malice, and you're connecting into an ecosystem that will answer a lot of other questions for you as they come up along the way (e.g. how do you ensure redundancy? Backups? Scale your network for traffic?). There are a ton of other advantages too. It's the cloud... dude...
Use a SaaS DB service such as Firebase (Note: We are leaving MySQL and SQL database territory with Firebase)
If you plan to let other parties access your MySQL instance to make use of your data, you might also want to consider implementing a REST API (or SOAP API if you hate the future) which acts as an abstraction layer to interact with your database and provide its data in a consistent and reliable format (a minimal sketch follows after this answer).
That's the best answer I can give with the details afforded. Look around, though; the options in this arena are near limitless depending on how and what you're trying to do.
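To make the REST-API suggestion concrete, here is a minimal sketch using the JDK's built-in com.sun.net.httpserver package; the /customers path and the hard-coded JSON payload are placeholders for whatever your database actually returns:

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class MiniRestApi {
    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        // GET /customers returns data in a consistent format; in a real API
        // this handler would query MySQL and serialize the result as JSON.
        server.createContext("/customers", (HttpExchange exchange) -> {
            String json = "[{\"id\": 1, \"name\": \"placeholder\"}]"; // hypothetical payload
            byte[] body = json.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}
```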
You should be able to access your machine from your LAN pretty easily unless there are firewall rules preventing connections to your machine. Alternatively, many cloud hosting providers have a free tier you can sign up for to bring up a test instance of MySQL. Example: OpenShift.

Using Chef/Puppet and managing hand-made changes

I'm running a complex server setup for a de facto high-availability service. So far it takes me about two days to set everything up, so I would like to automate the provisioning.
However, I make quite a lot of manual changes to (running) servers. A typical example is changing the firewall configuration to cope with various hacking attempts, packet floods, etc. Being able to work on active nodes quickly is important. Also, the server maintains a lot of active TCP connections, and losing those for a simple config change is out of the question.
I don't understand whether either Chef or Puppet is designed to deal with this. Once I change some system config, I would like to store it somewhere and use it while the next instance is being provisioned. Should I stick with one of those tools or choose a different one?
Hand-made changes and provisioning don't shake hands. They don't even drink tea together.
At work we use Puppet to manage the whole architecture, and like you, we need to make hand-made changes in a hurry due to performance bottlenecks, attacks, etc.
What we do first is make sure Puppet is able to set up every single part of the architecture, ready to be delivered without any specific tuning.
Then, when we need to make hand-made changes in a hurry: as long as you don't mess with files managed by Puppet, there's no risk; if it is a Puppet-managed file we need to change, then we just stop the Puppet agent and do whatever we need.
After the hurry has ended, we proceed as follows:
Should these changes be applied to all servers with the same symptoms?
If so, then you can develop what Puppet calls 'facts': code that is run by the agent on each run, saving its results in variables available to all your Puppet modules. For example, if you raised the ip_conntrack max value because a firewall was not able to deal with all its connections, you could easily (ten lines of code) have a variable in Puppet holding the current conntrack count on each run, and then tell Puppet to set a max value related to current usage. All the other servers will then benefit from this tuning, and you likely won't ever have to deal with conntrack issues again (as long as you keep running Puppet at a short interval, which is the default).
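Custom facts are normally a few lines of Ruby run by Facter; purely to illustrate the logic (reading the standard Linux conntrack counters and deriving a tuning value from current usage), here is the same idea sketched in Java. The 80%-usage/doubling policy is an assumption for the example, not something prescribed above:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ConntrackFact {
    public static void main(String[] args) throws IOException {
        // Standard Linux netfilter counters; a real Facter fact would do
        // exactly this read in a few lines of Ruby.
        int count = Integer.parseInt(
                Files.readString(Paths.get("/proc/sys/net/netfilter/nf_conntrack_count")).trim());
        int max = Integer.parseInt(
                Files.readString(Paths.get("/proc/sys/net/netfilter/nf_conntrack_max")).trim());
        // Example policy (an assumption): if usage is over 80% of the limit,
        // recommend doubling nf_conntrack_max; otherwise keep the current value.
        if (count > 0.8 * max) {
            System.out.println("recommended_conntrack_max=" + (max * 2));
        } else {
            System.out.println("recommended_conntrack_max=" + max);
        }
    }
}
```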
Should these changes always be applied by hand in certain emergencies?
If the configuration is managed by Puppet, find a way to make the configuration include another file and tell Puppet to ignore that file. This is the easiest way; however, it's not always possible (e.g. /etc/network/interfaces does not support includes). If it's not possible, then you will have to stop the Puppet agent during emergencies to be able to change Puppet-managed files without the risk of your changes being reverted on the next Puppet run.
Are these changes only for this host, such that no other host will ever need them?
Add them to Puppet anyway! Place a sweet if $fqdn == 'my.very.specific.host' block and put whatever you need inside it. Even for a single case, it's always beneficial (if time-consuming) to migrate every change you make to a server into Puppet, as it will allow you to do a full restore of the server's setup if for some reason the server ever crashes into an unrecoverable state (e.g. hardware failure).
In summary:
For me, the trick to dealing with hand-made changes is putting a lot of effort into the reasoning behind the change, and, once the emergency is over, moving that logic into Puppet. If you noticed that all the slots for a given piece of software were used while free memory was still available on the server, so allowing more slots to run was a reasonable way to handle the traffic peak, then spend some time moving that logic into Puppet. Very carefully, of course, and it's as time-consuming as the number of different scenarios in your architecture you want to test it against, but in the end it's very, VERY rewarding.
I would like to complement Valor's excellent answer.
Puppet is a tool to enforce a configuration, so you must think of it this way:
on the machine I run puppet on...
I ask the puppet client...
to ensure that the config of the current machine...
is as specified in the puppet config...
which is taken from a puppet server, or directly from a bunch of puppet files (easier)
So to answer one of your questions: Puppet doesn't require a machine or service reboot. But if a change to a config file you manage with Puppet requires a restart of the corresponding service/daemon/app, then there is no way to avoid it. There are methods in Puppet to declare that a service needs to be restarted when its config changes. Of course, Puppet will not restart the service if it sees that nothing changed.
Valor is assuming you use Puppet in the client/server way, with (for example) Puppet clients polling a Puppet server for config every hour. But it is also possible to move your Puppet files from machine to machine, for example with git, and launch Puppet manually. This way is:
far simpler than the client/server technique (authentication is a headache)
only enforces config changes when you explicitly ask for them, thus avoiding any overwrite of your hand-made changes
This is obviously not the best way to use puppet if you manage a lot of machines, but it may be a good start or a good transition.
Also, Puppet is quite hard to learn to a useful level. It took me two weeks to be able to automatically install an AWS server from scratch. I don't regret it, but you may want to know that if you must convince a boss to allocate you the time.

How do you handle passwords or credentials for standalone applications?

Let's say that you have a standalone application (a Java application in my case) and that this application has a configuration file (an XML file in my case) where you store the credentials (user and password) for a bunch of databases you need to connect to.
Everything works great, but now you discover (or you are given a new requirement, as in my case) that you have to put this application on a different server, and that you can't have these credentials in the configuration files because of security and/or compliance considerations.
I'm considering using data sources hosted in the application server (a WAS server), but I think this could have poor performance, and maybe it's not the best approach since I'm connecting from a standalone application.
I was also considering using some sort of encryption, but I would like to keep things as simple as possible.
How would you handle this case? Where would you put these credentials or protect them from being compromised? Or how would you connect to your databases in this scenario?
I was also considering using some sort of encryption, but I would like to keep things as simple as possible.
Take a look at the Java Cryptography Architecture - Password Based Encryption. The concept is fairly straightforward: you encrypt/decrypt the XML stream with a key derived from a user password prior to (de)serializing the file.
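Here is a minimal sketch of that idea using standard JCA primitives (PBKDF2 for key derivation plus AES-GCM; the algorithm names and iteration count are common choices, not requirements):

```java
import java.security.SecureRandom;
import java.security.spec.KeySpec;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public class ConfigCrypto {
    // Derive an AES key from a password; the salt must be stored alongside
    // the ciphertext so the same key can be re-derived later.
    static SecretKey deriveKey(char[] password, byte[] salt) throws Exception {
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        KeySpec spec = new PBEKeySpec(password, salt, 210_000, 256); // iteration count: a common choice
        SecretKey tmp = factory.generateSecret(spec);
        return new SecretKeySpec(tmp.getEncoded(), "AES");
    }

    // Encrypt the serialized XML; returns the random IV followed by the ciphertext.
    static byte[] encrypt(byte[] xml, SecretKey key) throws Exception {
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = cipher.doFinal(xml);
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return out;
    }

    // Decrypt a blob produced by encrypt(): the first 12 bytes are the IV.
    static byte[] decrypt(byte[] blob, SecretKey key) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, blob, 0, 12));
        return cipher.doFinal(blob, 12, blob.length - 12);
    }
}
```

Derive the key once, then wipe the password array (java.util.Arrays.fill(password, '\0')) and keep the decrypted bytes as tightly scoped as possible, in line with the points below.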
I'm only guessing at what your security/compliance considerations require, but definitely some things to consider:
Require strong passwords.
Try to minimize the amount of time that you leave the sensitive material decrypted.
At runtime, handle sensitive material carefully - don't leave it exposed in a global object; instead, try to reduce the scope of sensitive material as much as possible. For example, encapsulate all decrypted data as private in a single class.
Think about how you should handle the case where the password to the configuration file is lost. Perhaps it's simple, in that you can just create a new config file?
Require both a strong password and a user keyfile to access the configuration file. That would leave it up to the user to store the keyfile safely; and if either piece of information is accidentally exposed, it's still useless without both.
While this is probably overkill, I highly recommend taking a look at Applied Cryptography by Bruce Schneier. It provides a great look into the realm of crypto.
If your standalone application runs in a large business or enterprise, it's likely that they're using the Lightweight Directory Access Protocol, or LDAP, for their passwords.
You might want to consider using an LDAP, or providing hooks in your application for a corporate LDAP.
I'm considering using data sources hosted in the application server (a WAS server), but I think this could have poor performance and maybe it's not the best approach since I'm connecting from a standalone application.
On the contrary: those data sources are usually connection-pooled data sources, and pooling should actually improve DB performance, since opening a connection is, on balance, the most expensive part (see the sketch below).
Have you tested/benchmarked it?
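For a standalone application that can't use the app server's data sources, the same pooling benefit is available from a library. A minimal sketch using HikariCP, one common pooling library (the JDBC URL and credentials are placeholders, and the HikariCP and MySQL driver jars are assumed to be on the classpath):

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class PooledAccess {
    public static void main(String[] args) throws Exception {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://dbhost:3306/appdb"); // placeholder URL
        config.setUsername("appuser");                       // placeholder credentials
        config.setPassword("secret");
        config.setMaximumPoolSize(10);

        try (HikariDataSource ds = new HikariDataSource(config)) {
            // getConnection() hands back a pooled connection; close() returns
            // it to the pool instead of tearing down the TCP session.
            try (Connection c = ds.getConnection();
                 Statement s = c.createStatement();
                 ResultSet rs = s.executeQuery("SELECT 1")) {
                rs.next();
                System.out.println(rs.getInt(1));
            }
        }
    }
}
```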

Should I move client configuration data to the server?

I have a client software program used to launch alarms through a central server. At first it stored configuration data in registry entries, now in a configuration XML file. This configuration information consists of Alarm number, alarm group, hotkey combinations, and such.
This client connects to a server using a TCP socket, which it uses to communicate this configuration to the server. In the next generation of this program, I'm considering moving all configuration information to the server, which stores all of its information in a SQL database.
I envision using some form of web interface to communicate with the server and set up the clients, rather than the current method, which is to either configure the client software on the machine through a control panel, or on install to either push out an XML file or pass command-line parameters to the MSI. I'm thinking now the only information I would want to specify at install time would be the path to the server. Each workstation would be identified by computer name and configured through the server.
Are there any problems or potential drawbacks of this approach? The main goal is to centralize configuration and make it easier to make changes later, because our software is usually managed by one or two people at most.
Other than allowing the client to function offline (if such a possibility makes sense for your application), there doesn't appear to be any drawback to moving the configuration to a centralized location. Indeed, even with a centralized location, a feature can be added to the client to cache the last known configuration, for use when the client is offline.
If you implement a [centralized] database design, I suggest considering storing the configuration parameters in an Entity-Attribute-Value (EAV) structure, as this schema is particularly well suited for parameters. In particular, it allows easy addition and removal of individual parameters, and also handling parameters as a list (paving the way for a list-oriented display in the UI, and therefore no changes needed in the UI either when new types of parameters are introduced); a sketch follows below.
Another reason why configuration-parameter collections and EAV schemas work well together is that, even with very many users and configuration points, the configuration data remains small enough that it doesn't suffer from the limitations EAV runs into with "big" tables.
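A minimal sketch of such an EAV table through plain JDBC (SQLite in-memory is used only to keep the demo self-contained and assumes a SQLite JDBC driver on the classpath; the entity and attribute names are made up):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class EavConfigDemo {
    public static void main(String[] args) throws Exception {
        try (Connection c = DriverManager.getConnection("jdbc:sqlite::memory:")) {
            try (Statement s = c.createStatement()) {
                // One row per (entity, attribute) pair: adding a new kind of
                // parameter is an INSERT, not an ALTER TABLE.
                s.execute("CREATE TABLE config (" +
                          " entity    TEXT NOT NULL," +  // e.g. the workstation name
                          " attribute TEXT NOT NULL," +  // e.g. 'alarm_group'
                          " value     TEXT," +
                          " PRIMARY KEY (entity, attribute))");
            }
            try (PreparedStatement ins = c.prepareStatement(
                    "INSERT INTO config (entity, attribute, value) VALUES (?, ?, ?)")) {
                ins.setString(1, "WORKSTATION-01"); // hypothetical client
                ins.setString(2, "alarm_group");
                ins.setString(3, "night-shift");
                ins.executeUpdate();
            }
            // Listing a client's parameters maps directly to a list-style UI.
            try (PreparedStatement q = c.prepareStatement(
                    "SELECT attribute, value FROM config WHERE entity = ?")) {
                q.setString(1, "WORKSTATION-01");
                try (ResultSet rs = q.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1) + " = " + rs.getString(2));
                    }
                }
            }
        }
    }
}
```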
The only thing that comes to mind is security of the information, but you probably have that issue in either case. It would probably be easier to interface with a database, though, as everything would be in one spot.

Keep database information secure

There's this interesting problem I cannot solve myself. I would be very glad if you could help me.
Here it is:
There are many client applications that send data records to one MySQL server.
Individual data records are not very important, but the whole database is. (You can imagine it is Facebook's DB :) )
Is there any way to ensure that
data from the DB won't be used by anyone but the true owner
the DB will preserve essential features such as sorting etc.
assuming that an attacker can mysteriously gain full access to the server?
You can't simply encrypt data client-side and store it encrypted, since the client application is widely distributed and an attacker can extract the key from it.
Maybe adding some layers between the application and the DB, or combining client- and server-side encryption methods (using MySQL's built-in functions), would help?
As long as the database needs to start up and run unattended, you can't hide the keys from a compromised root account (= 'mysterious full access'). Anywhere the database could possibly store the master key(s), root will also have access. No amount of business layers or combination of client-server encryption will ever circumvent this simple fact. You can obfuscate it until the day after tomorrow, but if the prize is worth it, root can get to it.
One alternative is to require a manually assisted start-up process, i.e. a human enters the master key password during server boot (or a hardware module PIN), but this is extremely hard to maintain in the real world: it requires a highly trusted employee to be on call to log in and start the database whenever there is downtime.
Solutions like TPM offer protection against physical loss of the server, but not against a compromised root.
Your root is as important as the database master key(s), so you must protect root with the same care as the keys. This means setting up operating procedures, screening who has access to root, rotating the root password, and so on and so forth. The moment someone gains 'mysterious full access', the game is pretty much lost.
I pretty much agree with Remus Rusanu's answer.
Maintaining good security is hard, but you can always pay attention to what you do. Whenever you access sensitive information, carefully verify your query and make sure it cannot be spoofed or exploited to gain access to information which shouldn't be accessible to a given client.
If you can rule out physical access to the box by the attacker, then there are several things you can do to harden your security. First of all, I'd configure SSH access to only allow connections from a specific IP or IP range (and of course no root login). You can also do that on your firewall. This would mean that the weakest link is your server (the application which receives data/requests from clients; it could be a web server and whatever scripts you use). Now you "just" have to make sure that no one can exploit your server. There are a lot more things you could do to harden your system, but I think it would be more appropriate to ask on Server Fault.
If you're worried about physical access to the PC, there isn't really much you can do, and most of it has already been mentioned in Remus's answer.
There's also another option. This is by far the most ineffective method from a speed and ease-of-development viewpoint, but it would partly protect you from any kind of attack on your server (including physical). It's actually quite simple, but a bit hard to implement: only store encrypted data in the database and handle all encryption/decryption client-side, using JavaScript or Flash. Only the client will have the key, and data will always be transferred over the wire and stored in encrypted form. The biggest drawback is that once a client forgets the key, there's no way back; the data is inaccessible.
Of course, it's all a matter of time, money and effort; with enough of these, anything can be broken.
I've no idea if such a thing exists in MySQL, but row-level security (Virtual Private Database) in Oracle enables you to define access rights at row level IN the database: that means, regardless of what tool is being used to access the data, the user only ever sees the selection determined by his/her credentials.
So if my username/role is only allowed to see data limited by some WHERE clause, that clause can be appended to each and every SELECT that hits the database, regardless of whether it comes from a web app, a SQL querying tool, or whatever.
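If the database itself doesn't offer this (as with the MySQL in question), the effect can be approximated in the data-access layer by appending the restriction to every query. A hedged sketch (the records table and owner_id column are hypothetical):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class RowLevelFilter {
    // Every read against the (hypothetical) records table goes through this
    // method, so a client can never see rows owned by someone else.
    static List<String> selectForUser(Connection c, String ownerId) throws SQLException {
        List<String> payloads = new ArrayList<>();
        // The "WHERE owner_id = ?" restriction is appended to every query,
        // emulating row-level security in the data-access layer.
        try (PreparedStatement ps = c.prepareStatement(
                "SELECT payload FROM records WHERE owner_id = ?")) {
            ps.setString(1, ownerId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    payloads.add(rs.getString(1));
                }
            }
        }
        return payloads;
    }
}
```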
I would use a second layer and a firewall between them.
So you have: firewall --- web server --- firewall --- 2nd-layer server --- firewall --- DB.
It would be wise to use different platforms between layers; it all depends on how important the data is.
In any case, the web server should have no direct access to the DB.
About preserving sorting: if you use a file-encryption mechanism, it will only protect you from hard-drive theft.
If you encrypt the data itself, and if you do it smartly (storing the keys in a separate place), you will not lose sorting, as you will look for the encrypted entry and not the real one. But now you have another thing to protect...
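One concrete way to keep exact-match lookups over encrypted entries is to store, next to the ciphertext, a deterministic keyed hash (e.g. HMAC-SHA256) of the plaintext and search on that column instead. A sketch (key handling is deliberately simplified; note this preserves equality lookups, not range sorting):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class BlindIndex {
    // Deterministic keyed hash: equal plaintexts give equal index values,
    // so "WHERE email_idx = ?" still works, while the column itself reveals
    // nothing without the key. The key must live outside the database.
    static String indexValue(byte[] hmacKey, String plaintext) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(hmacKey, "HmacSHA256"));
        byte[] tag = mac.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(tag);
    }
}
```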