Sawtooth newbie here. I am currently working on a supply chain POC that uses Sawtooth as the record store, and there is one use case I am still trying to sort out. Here it is: say there are two companies, A and B. Both companies upload documents to the supply chain system, and the file URL is stored in Sawtooth. In this case, what is the best design for permission control, so that those documents can be viewed by all company A and company B staff? Thanks!
Once the file URL is stored on the chain, anyone with access to the node will be able to view it. Restricting who can read from the Sawtooth chain is done much like traditional access restriction for APIs or servers: you can put the Sawtooth system behind a proxy and an API of your own and grant permissions there. That is one way to do it.
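To make that concrete, here is a minimal sketch (not an official Sawtooth component) of a read proxy that checks an API key before forwarding requests to the Sawtooth REST API. It assumes the default REST API on localhost:8008; the keys, header name, and route are placeholders for whatever scheme you choose.

# Minimal sketch of a permissioned read proxy in front of the Sawtooth REST API.
# Assumes the default REST API at localhost:8008; keys and routes are placeholders.
import requests
from flask import Flask, abort, jsonify, request

SAWTOOTH_REST_API = "http://localhost:8008"

# Hypothetical API keys issued to company A and company B staff.
API_KEYS = {
    "key-for-company-a-staff": "company_a",
    "key-for-company-b-staff": "company_b",
}

app = Flask(__name__)

def caller_company():
    company = API_KEYS.get(request.headers.get("X-Api-Key", ""))
    if company is None:
        abort(401)  # unknown or missing key
    return company

@app.route("/documents/<address>")
def read_document(address):
    company = caller_company()
    # In this use case both companies may read the shared documents; a
    # finer-grained check could compare `company` against metadata stored
    # alongside the document URL.
    resp = requests.get(f"{SAWTOOTH_REST_API}/state/{address}")
    if resp.status_code != 200:
        abort(resp.status_code)
    return jsonify(resp.json())

if __name__ == "__main__":
    app.run(port=8443)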
Related
I have two systems, let's call them i and j. Each has its own database.
Each has a registration page where a user is inserted into a user table.
What is the best way to synchronize both tables, so that if a user registers at system i they are also registered at system j?
Notes:
The systems cannot read from each other's databases directly.
I can make small changes in the code if needed, as long as they do not affect the systems' performance or normal behavior.
I can create APIs for both systems if needed.
I can add any tables or fields if needed.
I can create cron jobs, as long as they do not affect the performance of the system or server.
I'm using cPanel.
Technologies:
MySQL
PHP
REST APIs
The fact that you list cPanel as a technology shows you're working with an inflexible budget hosting vendor, so it's unlikely they'll cooperate in setting up background tasks (cron jobs) to merge your user tables behind the scenes. (cPanel isn't a technology: it's a system administration user interface provided by hosting vendors who don't trust their customers' skills.)
So: you should design and implement a REST API in the code of both your apps to perform user registration and authentication tasks. You didn't show us the details of your apps, so it's hard to design it for you. Still, it seems likely you'll have to implement these operations:
PUT user
DELETE user
GET user
POST user to validate a user's password, etc. (Don't use GET to pass secret information: GET request parameters go into server logs.)
PATCH to update details of a user.
If you get the API working, whenever you create/retrieve/update/delete user information in one app, you'll use the API to change it in the other.
Your best bet would be to create a third app just for user management and have both your existing apps use it. That way you're sure to have one coherent source of truth about users. But it can also be done within just the two apps.
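To illustrate the cross-registration call, here is a rough sketch in Python (your stack is PHP, but the call shape is the same; the endpoint, token, and field names are placeholders for whatever API you define):

# Sketch: after system i inserts a user locally, mirror it to system j.
# URL, token, and payload fields are placeholders for the API you design.
import requests

SYSTEM_J_API = "https://system-j.example.com/api/users"
API_TOKEN = "shared-secret-token"  # placeholder; use real credentials in practice

def mirror_user_to_system_j(user):
    """Push a newly registered user to the peer system (idempotent PUT)."""
    resp = requests.put(
        f"{SYSTEM_J_API}/{user['email']}",
        json={"email": user["email"], "name": user["name"]},
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=5,
    )
    resp.raise_for_status()

# Called right after the local INSERT in system i's registration code.
mirror_user_to_system_j({"email": "alice@example.com", "name": "Alice"})

If the call fails, queue it (for example in a small "outbox" table) and retry later so the two user tables do not drift apart.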
So I have a website that provides an API for logging in/registering/etc., and I have MediaWiki set up on my server.
I need to disallow MediaWiki registration and only allow logging people in through our API. So, when a user tries to log in, no request for the user should go to the MediaWiki DB; instead, a request should go to our API, logging the person in if our API returns the correct data and displaying an error if it doesn't.
Is there a way to get it done with MediaWiki?
Thanks in advance.
Your question is very broad and involves some development but also a lot of configuration. So, let's start:
First of all, you need to integrate with the API you mentioned somehow, which is possible by developing your own primary authentication provider. See the high-level documentation. In it, you will find all the necessary entry points a user might hit when logging in or registering a new account, and you can "translate" them into the actual actions you need to perform against your API (which you do not describe, so we cannot give you better guidance here).
The second step would then be to configure this new authentication provider as the only one, using $wgAuthManagerConfig, which will in effect disable all other ways of creating accounts as well as logging in with accounts other than the ones provided by your API.
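To show the shape of what the provider ultimately delegates to your API, here is a rough sketch in Python (only for illustration; the real provider is PHP code inside MediaWiki, and the endpoint and response format are placeholders since you haven't described your API):

# Sketch of the credential check a custom primary authentication provider
# would delegate to the external API. Endpoint and fields are placeholders.
import requests

AUTH_API = "https://your-site.example.com/api/login"

def check_credentials(username: str, password: str) -> bool:
    """Return True if the external API accepts this username/password pair."""
    resp = requests.post(
        AUTH_API,
        json={"username": username, "password": password},
        timeout=5,
    )
    # Assumes the API answers 200 with {"ok": true} on success.
    return resp.status_code == 200 and resp.json().get("ok", False)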
If you have more questions, I would suggest that you provide more information and the specific points where you're stuck :)
You can have a look at Extension:Auth_remoteuser; it could fit your needs at least partly.
I am creating a mobile application using Swift for my organization. The application reads in data in JSON format to populate the information displayed in the app. I already have a method to generate the JSON files, but I need somewhere to host the actual files. I have an AWS account and an instance running, and this is where I initially hosted my JSON files, but I got an email from AWS saying that having the app constantly grab the JSON files I stored there resembled scanning behaviour, which apparently is not allowed. So I was wondering where I could host JSON files so that my mobile app can read in the information it needs. The biggest thing I need is that I can host them at a static URL that I can keep calling from my app.
I was thinking of potentially putting the files on an AWS bucket with read permissions and having those get accessed, but since AWS already complained about me doing something like that I'm iffy. I was also thinking of putting the JSON files on Github, but again I'd hate to get an email from github telling me that they don't like that an application keeps grabbing the data.
For background, the app essentially has a hardcoded URL that grabs the JSON data and parses it. I didn't build an API because an API takes some time to fetch information that doesn't really change that often; it's much easier to generate the JSON files locally and just post them online somewhere. The information in them can be read by anyone, too; it's not private or anything.
Message from AWS:
Hello,
We've received a report(s) that your AWS resource(s)
information
has been implicated in activity which resembles scanning remote hosts on the internet for security vulnerabilities. Activity of this nature is forbidden in the AWS Acceptable Use Policy (https://aws.amazon.com/aup/). We've included the original report below for your review.
Please take action to stop the reported activity and reply directly to this email with details of the corrective actions you have taken. If you do not consider the activity described in these reports to be abusive, please reply to this email with details of your use case.
If you're unaware of this activity, it's possible that your environment has been compromised by an external attacker, or a vulnerability is allowing your machine to be used in a way that it was not intended.
We are unable to assist you with troubleshooting or technical inquiries. However, for guidance on securing your instance, we recommend reviewing the following resources:
I'm new so it won't let me post links but they attached a couple help links
If you require further assistance with this matter, you can take advantage of our developer forums:
more links I can't have
Or, if you are subscribed to a Premium Support package, you may reach out for one-on-one assistance here:
link
Please remember that you are responsible for ensuring that your instances and all applications are properly secured. If you require any further information to assist you in identifying or rectifying this issue, please let us know in a direct reply to this message.
Regards,
AWS Abuse
Abuse Case Number:
Using an AWS EC2 instance to host static files (which is what it sounds like you were doing?) is pretty standard and I suspect that this is not what Amazon is complaining about. More likely, your instance has been infected by some sort of software which is causing it to request many files from other random servers on the web ("scanning for remote vulnerabilities"). You should check that you have not accidentally publicly posted your AWS credentials (in any form), and consider wiping the instance and resetting it. And of course reply to the email explaining this to AWS.
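If you do move the files to an S3 bucket with public read access, as you were considering, here is a rough sketch of publishing them with boto3 (bucket name, key, and local path are placeholders; note that newer buckets block public ACLs by default, so you may need a bucket policy to allow anonymous reads):

# Sketch: publish a locally generated JSON file to S3 so the app can fetch it
# from a stable URL. Bucket, key, and local path are placeholders.
import boto3

s3 = boto3.client("s3")

def publish_json(local_path: str, bucket: str, key: str) -> str:
    s3.upload_file(
        local_path,
        bucket,
        key,
        ExtraArgs={"ContentType": "application/json"},
    )
    # With a bucket policy that allows public reads, the object is reachable
    # at a stable URL the app can hard-code:
    return f"https://{bucket}.s3.amazonaws.com/{key}"

print(publish_json("data/feed.json", "my-app-json-bucket", "feed.json"))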
Backstory
I work for a company that has an online site that allows users to text personal information for collection. We collect the data and make it available online. Users can choose to share the data with other users.
Going Forward
At some point, this may become classified as an FDA-governed medical tool. In anticipation, we'd like to have in place a logging system that shows each time someone accesses our users' data, whether it be the user themselves, another authorized user, or a support person.
Current Architecture
We are currently running Ruby/Rails, and using a MySQL database. The personal information is encrypted in the database.
Data Access for Support
Today, support personnel can access data in one of three ways:
admin site - The admin site is limited to whatever screens we develop. While we don't currently log access, we could easily add logging to keep an audit trail of who accessed which data using the admin tool.
sql client - I use MySQLWorkbench to access production. However, when connected this way, all personal information (user name, cell number, etc.) is encrypted.
Ruby/Rails console - Finally, support can log into one of the production boxes and use the Ruby/Rails console from the command line. Ruby will decrypt the data, so we can do some simple things such as
u=User.find_all_by_state('active')
and it will return the recordset of all users with state='active', and decrypt their personal information in the resultset.
Holy Grail
logging
easy access for support
I'd love to have a way to allow support easy access to the data (once authenticated), but that logs everything that is accessed (read or updated). That way, if I'm checking out my buddy's ex-wife's data, for example, it gets logged to a place where I can't get in and clean up the audit trail. (See Google firing a Gmail employee for an example of employees breaching data policies.)
Anyone have ideas, thoughts, experiences, suggestions with this issue?
Hey devguy. This was an issue for me a couple of months back. We ended up centralizing our MySQL queries so that we could start to track all information coming in and out. Unfortunately the class I wrote is in PHP, but the idea behind it could make it very easy to start logging.
https://code.google.com/p/php-centralized-mysql-controller/
Try stored procedures. Make all code use the stored procedures for CRUD activities. This defines an API that your developers can use while business rules are globally enforced (don't return entire SSN values, only the last 4 digits, etc.).
This serves as the basis for an external API as well.
If you want logging/auditing, you put it in the procedure.
This protects you from everyone except the DBAs.
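As a sketch of the calling side (Python with mysql-connector for illustration, since your stack is Rails; the procedure name, its arguments, and the masking/audit behavior inside it are assumptions):

# Sketch: application code only ever calls stored procedures, never raw tables.
# The hypothetical procedure support_get_user masks sensitive fields and writes
# an audit row recording who looked at which user before returning data.
import mysql.connector

conn = mysql.connector.connect(
    host="db.example.com", user="support_app", password="...", database="app"
)
cursor = conn.cursor()

def support_lookup(user_id, support_staff):
    # callproc runs the procedure; the auditing happens inside the procedure,
    # so there is no way to read the data without leaving a trail.
    cursor.callproc("support_get_user", (user_id, support_staff))
    for result in cursor.stored_results():
        return result.fetchall()

rows = support_lookup(42, "devguy")
conn.commit()  # persist the audit row written inside the procedure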
I want to build a web-based front-end to manage/administer my Linux box. E.g. I want to be able to add users, manage the file system and all those sorts of things. Think of it as a cPanel clone, but more for system admin rather than web admin.
I was thinking about creating a service that runs on my box and performs all the system-level tasks. This way I can have a clear separation between my web-based front-end and the actual logic. The server pages can then make calls to my specialized service or queue tasks for it. However, I'm not sure if this would be the best way to go about it.
I guess another important question would be, how I would deal with security when building something like this?
PS: This is just a pet project and a learning experience, so I'm not interested in existing solutions that do something similar.
Have the specialized service daemon running as a distinct user -- let's call it 'managerd'. Set up your /etc/sudoers file so that 'managerd' can execute the various commands you want it to be able to run, as root, without a password.
Have the web server drop "trigger" files containing the commands to run in a directory that is mode '770' with a group that only the web server user and 'managerd' are members of. Make sure that 'managerd' verifies that the files have the correct ownership before executing the command.
Make sure that the web interface side is locked down -- run it over HTTPS only, require authentication, and if at all possible, put in IP-specific ACLs so that you can only access it from locations known in advance.
Your approach seems like a very sensible solution to the 'root' issue.
Couple of suggestions:
Binding the 'specialised service' to localhost as well would help to guarantee that requests can't be made externally.
Have the requests call functions that perform specific actions, rather than giving the service full, unrestricted access. So call a function like "addToGroup(user, group)" instead of a generic "performAction(command)".
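Tying both answers together, here is a rough sketch of what the 'managerd' loop could look like (Python; the spool directory, user names, and action set are placeholder assumptions): it polls the drop directory, verifies the file owner, and dispatches only a small allow-list of named actions rather than arbitrary commands.

# Sketch of a 'managerd' daemon: polls a mode-770 drop directory for trigger
# files, verifies who wrote them, and runs only named, allow-listed actions
# via sudo. Paths, names, and the file format are assumptions for illustration.
import json
import os
import pwd
import subprocess
import time

SPOOL_DIR = "/var/spool/managerd"  # mode 770, group shared with the web server
WEB_USER = "www-data"              # the only user allowed to drop trigger files

# Allow-list: named actions mapped to the exact commands they may run.
ACTIONS = {
    "addToGroup": lambda user, group: ["sudo", "usermod", "-aG", group, user],
    "createUser": lambda user: ["sudo", "useradd", "-m", user],
}

def owned_by_web_server(path):
    return pwd.getpwuid(os.stat(path).st_uid).pw_name == WEB_USER

def handle(path):
    if not owned_by_web_server(path):
        return  # ignore files dropped by anyone else
    with open(path) as f:
        req = json.load(f)  # e.g. {"action": "addToGroup", "args": ["alice", "dev"]}
    builder = ACTIONS.get(req.get("action"))
    if builder:
        subprocess.run(builder(*req.get("args", [])), check=False)
    os.remove(path)

while True:
    for name in os.listdir(SPOOL_DIR):
        handle(os.path.join(SPOOL_DIR, name))
    time.sleep(1)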