I am creating an SOA application and would like to separate the logic into multiple OpenShift applications. The problem is that some of the applications aren't supposed to be accessible from the internet -- only private services are going to use them.
I would like to allow access to those applications only from specific domains (let's say appB-domain.rhcloud.com and appC-domain.rhcloud.com). Is it possible to do that somehow on OpenShift (for example with firewall rules -- I haven't found anything about them in the docs)? Or must I do that at the application level?
Thanks
You need to do that at the application level. There are no firewall rules that you can change; for instance, you cannot update iptables.
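One common application-level approach is to require a pre-shared secret on every internal request and reject everything else. Below is a minimal sketch in Go (the language choice, the X-Internal-Token header name and the INTERNAL_TOKEN environment variable are all illustrative assumptions, not anything OpenShift provides):

```go
package main

import (
	"crypto/subtle"
	"net/http"
	"os"
)

// requireToken rejects any request that does not carry the shared secret,
// so only your own services (which know the token) can reach this app.
func requireToken(next http.Handler) http.Handler {
	secret := os.Getenv("INTERNAL_TOKEN") // assumed to be configured on both apps
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		got := r.Header.Get("X-Internal-Token")
		if secret == "" || subtle.ConstantTimeCompare([]byte(got), []byte(secret)) != 1 {
			http.Error(w, "forbidden", http.StatusForbidden)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("private service\n"))
	})
	http.ListenAndServe(":8080", requireToken(mux))
}
```

Since source IPs can be hard to pin down behind shared hosting infrastructure, a shared secret like this is often simpler than trying to check which domain the caller came from.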
I have the gist of how to connect to a MySQL server; however, my dilemma is handling passwords. Here are some of the things I am looking at.
The architecture will be one core service, which as of right now will be set up as a digest authentication service. Note: in the future I will also have it set up for Kerberos authentication.
The core service will have a schema in MySQL that it needs to be able to access. The microservices will likewise have their own schemas that they also need to be able to access.
The database will be localhost initially but will eventually be moved (in production) to a separate server altogether.
Given the requirements above, I cannot give the services MySQL users that are restricted to localhost and have no password associated with them (nor would I want that, in case the server was hacked). So how can I access the database without using any plain-text passwords (I don't want them stored in the code)?
Maybe I am just not understanding something here that could make my life so much easier so again I look towards the wisdom of the many here. Thanks in advance!
Some things that I should maybe mention: I plan on using go-martini as my HTTP router; I'd like to be able to set up an OAuth provider; and I will need to manage user sessions and authentication (not as important right now, as I'm trying to get the core part of the service set up).
Edit: to clarify some information:
I do not have AD, Kerberos, or any other LDAP service to use, and I would be hard pressed to set one up at this time in the VM I use for development.
The service should not be dependent on any of those items as SSO is a much later requirement in this project.
Strictly speaking, it will be deployed in environments where none of those are available, and this is non-negotiable.
I am also specifically developing the services in Go and the clients in React.
Note: I do not need someone to correct MY question. I would appreciate it if you do not change the context of my question to suit the answer you wish to give me. That is not what StackOverflow is about, and it is also quite rude to do. Thank you.
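As a hedged illustration of one way to meet the "no plain-text password in the code" requirement, here is a minimal Go sketch that builds the MySQL DSN from environment variables at start-up (the go-sql-driver/mysql driver and the variable names are assumptions, not part of the question):

```go
package main

import (
	"database/sql"
	"fmt"
	"log"
	"os"

	_ "github.com/go-sql-driver/mysql" // registers the "mysql" driver
)

// openDB builds the DSN from the environment so no credentials live in the
// code or in version control; in production the variables can be injected by
// the init system, the container runtime, or a secrets manager.
func openDB() (*sql.DB, error) {
	user := os.Getenv("DB_USER")
	pass := os.Getenv("DB_PASS")
	host := os.Getenv("DB_HOST") // e.g. "127.0.0.1:3306" now, a remote server later
	name := os.Getenv("DB_NAME")
	dsn := fmt.Sprintf("%s:%s@tcp(%s)/%s", user, pass, host, name)
	return sql.Open("mysql", dsn)
}

func main() {
	db, err := openDB()
	if err != nil {
		log.Fatal(err)
	}
	if err := db.Ping(); err != nil { // verify the credentials actually work
		log.Fatal(err)
	}
	log.Println("connected")
}
```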
Is it possible to make an integration between Alfresco and LDAP to manage groups, users and permissions?
I mean, must Alfresco groups be managed with their own set of permissions? Currently I have an LDAP repository for authentication, but it is a lot of work to maintain users and groups across multiple systems.
In other words, can I fully integrate these two environments easily and without modifying the core of Alfresco?
Thanks in advance
Short answer is no.
IMHO, externally managed users, groups and authentication are already the maximum that makes sense. Even then, some authorities (users/groups) will still be created locally in Alfresco once you start using Share sites and invite external people. Sure, this could technically be changed, but having Alfresco write to LDAP opens a new can of worms. The default LDAP read/sync approach should not cause significant extra effort.
Authorization data, such as roles (which are easily confused with groups) and permissions, and their semantics, are highly dependent on the application (Alfresco). It does not make sense to manage them in an external system that has no knowledge of them.
I am working on an application which acts as a setup box for other child applications. I want to set up the child applications from one central parent application. Setup includes database setup (db:create and db:migrate), subdomain setup, etc. for the child apps.
This is going to work like this: a subscriber will subscribe to many applications. On subscription, the application will be configured to work on the subscriber's provided subdomain (on my site). Every instance of a subscribed application will have its own database. So I need to set up a database for each subscriber, and a domain name too.
Currently I am creating the database based on the child application's subdomain, using ActiveRecord::Base.connection.execute.
After creating the database I want to load the child app's schema into it. For this I had posted a question here:
schema.sql not creating even after setting schema_format = :sql
Is there any good, efficient method/approach that will help me?
Also, I am a bit confused about subdomains and how that is going to work.
Any help/thought appreciated...
Thanks,
Pravin
Since there is no real need for a separate database for each user and for each 'app', you may want to look into the concept called multi-tenancy.
Also, subdomains can be handled in Rails 3, and you can use something called Devise for user authentication. GitHub has a Rails 3 subdomain Devise authentication fork to get you started.
Until you really see a need for all these databases, keep it simple. One database per application, and connect to each application via Active Resource.
Be warned that what you are undertaking can confuse even a hardened app builder, so I hope your experience goes beyond what your current Stack Overflow reputation suggests.
All the best.
I've got a Windows Azure project I'm working on. It has two web roles - one is a public-facing site, and the second is an administration site for my customer to make changes to the database etc.
I had expected to be able to use a subdomain for each role - so for example have mysite.com and admin.mysite.com (obviously CNAME-mapped to the .cloudapp.net DNS name). However, it looks like Azure doesn't do this, and instead uses a single hostname (mysite.com) with a different port for each web role. So, for example, I would have mysite.com:80 for the main public site and mysite.com:8080 for the administration site.
Is this correct? Is there no way I can have subdomains for particular web roles?
Thanks in advance
John
This is correct. You can, of course, respond to both subdomains in a single role. But multiple web roles in Windows Azure correspond to multiple ports on the same virtual IP address.
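As a hedged illustration of "respond to both subdomains in a single role", the sketch below routes on the Host header inside one process. It is written in Go purely to show the idea; in an actual Azure web role you would do the equivalent with IIS host-header bindings or inside your ASP.NET application.

```go
package main

import (
	"net/http"
	"strings"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// One process, one port: pick which "site" to serve from the Host header.
		if strings.HasPrefix(strings.ToLower(r.Host), "admin.") {
			w.Write([]byte("administration site\n"))
			return
		}
		w.Write([]byte("public site\n"))
	})
	http.ListenAndServe(":80", nil)
}
```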
I want to build a web-based front-end to manage/administer my Linux box. E.g. I want to be able to add users, manage the file system, and all those sorts of things. Think of it as a cPanel clone, but geared more towards system administration than web administration.
I was thinking about creating a service that runs on my box and performs all the system-level tasks. This way I can have a clear separation between my web-based front-end and the actual logic. The server pages can then make calls to my specialized server, or queue tasks that way. However, I'm not sure if this would be the best way to go about it.
I guess another important question would be: how would I deal with security when building something like this?
PS: This is just a pet project and learning experience, so I'm not interested in existing solutions that do a similar thing.
Have the specialized service daemon running as a distinct user -- let's call it 'managerd'. Set up your /etc/sudoers file so that 'managerd' can execute the various commands you want it to be able to run, as root, without a password.
Have the web server drop "trigger" files containing the commands to run in a directory that is mode '770' with a group that only the web server user and 'managerd' are members of. Make sure that 'managerd' verifies that the files have the correct ownership before executing the command.
Make sure that the web interface side is locked down -- run it over HTTPS only, require authentication, and if at all possible, put in IP-specific ACLs so that you can only access it from locations known in advance.
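A minimal Go sketch of the 'managerd' side of that design (the drop directory, the whitelist, and the web server UID are all assumptions): it scans the drop directory, verifies each trigger file is owned by the web server user, and only then runs a whitelisted command through sudo.

```go
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
	"syscall"
	"time"
)

const (
	dropDir      = "/var/spool/managerd" // mode 770, group shared with the web server
	webServerUID = 33                    // assumed UID of the web server user (e.g. www-data)
)

// allowed maps a trigger keyword to the exact command managerd may run via sudo
// (the sudoers entry grants 'managerd' passwordless access to just these binaries).
var allowed = map[string][]string{
	"adduser": {"sudo", "/usr/sbin/useradd"},
}

func process(path string) {
	info, err := os.Stat(path)
	if err != nil {
		return
	}
	// Verify ownership before trusting the file, as described above.
	if stat, ok := info.Sys().(*syscall.Stat_t); !ok || stat.Uid != webServerUID {
		log.Printf("ignoring %s: unexpected owner", path)
		return
	}
	data, err := os.ReadFile(path)
	if err != nil {
		return
	}
	fields := strings.Fields(string(data)) // e.g. "adduser alice"
	if len(fields) == 0 {
		return
	}
	cmd, ok := allowed[fields[0]]
	if !ok {
		log.Printf("ignoring %s: %q not whitelisted", path, fields[0])
		return
	}
	args := append(append([]string{}, cmd[1:]...), fields[1:]...)
	if err := exec.Command(cmd[0], args...).Run(); err != nil {
		log.Printf("%s failed: %v", fields[0], err)
	}
	os.Remove(path) // consume the trigger file
}

func main() {
	for {
		files, _ := filepath.Glob(filepath.Join(dropDir, "*.trigger"))
		for _, f := range files {
			process(f)
		}
		time.Sleep(2 * time.Second) // simple polling; inotify would also work
	}
}
```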
Your approach seems like a very sensible solution to the 'root' issue.
A couple of suggestions:
Binding the 'specialised service' to localhost would also help guarantee that requests can't be made externally.
Have the requests call functions that perform specific actions rather than giving the service full, unrestricted access -- so call a function like "addToGroup(user, group)" instead of a generic "performAction(command)", as sketched below.
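To illustrate that second suggestion, here is a hedged Go sketch (the function, binary paths, and the example user/group names are made up): each exposed operation does exactly one well-defined thing and validates its inputs, so the front-end never gets a generic "run this command" entry point.

```go
package main

import (
	"fmt"
	"os/exec"
	"regexp"
)

var validName = regexp.MustCompile(`^[a-z_][a-z0-9_-]*$`)

// AddToGroup performs exactly one well-defined action. The caller cannot
// smuggle in arbitrary commands, only a user and a group, both validated.
func AddToGroup(user, group string) error {
	if !validName.MatchString(user) || !validName.MatchString(group) {
		return fmt.Errorf("invalid user or group name")
	}
	return exec.Command("sudo", "/usr/sbin/usermod", "-aG", group, user).Run()
}

// By contrast, a generic PerformAction(command string) that shells out to
// whatever string it receives would hand the web front-end unrestricted access.

func main() {
	if err := AddToGroup("alice", "developers"); err != nil {
		fmt.Println(err)
	}
}
```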