I've been looking for info on how/if Forge encrypts data at rest. We have some customers with sensitive models that are asking the question.
Is data at rest encrypted?
If so, what method of encryption is used, and is it on by default?
If not, is this a planned feature in the future?
The Forge REST API uses HTTPS, which means data is transferred between the client and server (both ways) over SSL/TLS. SSL encrypts the data for you automatically using the server's trusted certificate. Here is a complete article on the protocol if you are interested in reading more about it.
Edited based on comments below: if we are talking about storage, all data stored on the Forge servers is encrypted with your developer keys. Forge encrypts your data at the object level as it writes it to disk and decrypts it for you when you access it.
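For illustration, a minimal Python sketch (stdlib only, not a Forge client) showing that a standard HTTPS client context enforces the trusted-certificate check by default, which is what makes the transport encryption transparent:

```python
import ssl

# A default client-side TLS context: certificate verification and
# hostname checking are enabled out of the box.  This is the machinery
# behind the "trusted certificate" part of any HTTPS API call.
context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)  # True: certs must validate
print(context.check_hostname)                    # True: hostnames are checked
```

Any HTTPS library you use to call the Forge API builds on a context like this, so the wire encryption costs you nothing extra in application code.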
I've edited this question. I hope this version is a bit more clear.
I am seeking to have a programmer build a process for me, and I need to ensure that what is recommended is a best practice for the process below.
Here are the steps I need to have built:
An HTTPS web form on my server that submits client-entered data into a database on my server. The data is personally identifiable information and needs to be securely transmitted in the next step.
Once the data is loaded into my database, I need to transfer it in an encrypted JSON format to a third-party server. The third party will decrypt the data, score it, and send it back to my server encrypted.
While the data is being sent and scored by the third party, the client will see a browser screen indicating that processing is in progress.
Once the data is scored and sent back to my server, it will be decrypted, and the client's browser will be updated with options based on the score given by the third party.
Based on what I understand, I think an API on both my server and the third-party server might be best.
What is the best practice approach for the above process?
Below are some questions I have which would be very helpful for me to understand in your response.
Is the API approach the best?
What process does the third party use to decrypt the data I send, and vice versa? How do I prevent others from decrypting the data if it is intercepted?
While the data is being scored by the third party, the client browser will show a processing screen. From a web development standpoint, how does this work? Also, how exactly is the processing screen triggered to update with results in the client's browser when the data is sent back from the third party?
The file that you will be transmitting is, as you mentioned, encrypted, so the stored format depends entirely on the encryption algorithm you use. Encrypted data is generally stored as Base64 or hex, so after encryption the data will be passed along in one of those formats.
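To make the Base64/hex point concrete, here is a minimal Python sketch. The ciphertext bytes are made up (a real cipher such as AES would produce them); only the encoding step is shown:

```python
import base64

# Pretend these bytes came out of your cipher (e.g. AES).  Raw
# ciphertext is arbitrary binary, so it is not safe to embed in
# JSON or a URL directly; you encode it first.
ciphertext = bytes([0x8f, 0x00, 0x1a, 0xff, 0x42])

as_base64 = base64.b64encode(ciphertext).decode("ascii")
as_hex = ciphertext.hex()

print(as_base64)  # jwAa/0I=
print(as_hex)     # 8f001aff42
```

The receiver reverses the encoding (`base64.b64decode` / `bytes.fromhex`) before handing the bytes to its decryption routine.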
To answer your second question, "how will the receiving website receive the file?", there are several ways you can do this:
You can share the backend database your website is using; then it is just a simple query away (by shared I mean both websites use the same database).
Another way of achieving this is to use an API which can store your data and can be used globally by any application that calls it.
Or you can set up a simple PHP server locally on your machine and send data between the websites using HTTP GET or HTTP POST requests.
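As a sketch of the GET/POST option, here is a self-contained Python example (stdlib only; plain HTTP on localhost for the demo, whereas a real deployment would use HTTPS, and the payload fields are made up):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

# Minimal receiving endpoint: reads the JSON body of a POST and
# replies with how many fields it received.  A real third party
# would decrypt and score the data here instead.
class Receiver(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        payload = json.loads(body)
        reply = json.dumps({"received": len(payload)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Receiver)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Sending side: an HTTP POST with a JSON body, as the answer suggests.
url = "http://127.0.0.1:%d/" % server.server_port
req = Request(url, data=json.dumps({"name": "Ada", "score": 7}).encode(),
              headers={"Content-Type": "application/json"})
with urlopen(req) as resp:
    result = json.loads(resp.read())

print(result)  # {'received': 2}
server.shutdown()
```

The same request shape works across machines once the receiver is reachable over HTTPS; only the URL changes.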
Also, avoid unnecessary tags like web-development-server, data-transfer, or transmission; these tags are unrelated to your question. You should only use tags that are related to your question, and a simple web-development tag would be enough.
Please also edit your question so we can understand it properly: what problems are you facing? What have you tried? What do you expect from an answer?
Your concept of files being sent around is somewhat off: in most cases none of this is ever written to disk, so there is no JSON file with a file name, and the data is not encrypted directly but only pushed through an encrypted channel. Most commonly both sides use either HTTPS or WSS as the protocol, which encrypts and decrypts the exchanged data transparently (all by itself). Depending on the protocol in use, this requires either a client and server, a pair of servers, or a P2P network.
Further reading: Internetworking Basics - Computer and Information Science.
I have a full deployment of Couchbase (Server, Sync Gateway, and Lite) and have an API, a mobile app, and a web app all using it.
It works very well, but I was wondering if there are any advantages to using the Sync Gateway API over the Couchbase SDK? Specifically I would like to know if Sync Gateway would handle larger numbers of operations better than the SDK, perhaps an internal queue/cache system, but can't seem to find definitive documentation for this.
At the moment the API uses the C# Couchbase SDK, and we use Sync Gateway very little (only really for synchronising the mobile app).
First, some relevant background info:
Every document that needs to be synced over to Couchbase Lite (CBL) clients needs to be processed by the Sync Gateway (SGW). This is true whether a doc is written via the SGW API or whether it comes in via a server write (N1QL or SDK). The latter case is referred to as "import processing", wherein a document written to the bucket (via N1QL/SDK) is read by SGW over the DCP feed. The document is then processed by SGW and written back to the bucket with the relevant sync metadata.
Prerequisite:
In order for SGW to import documents written directly via N1QL/SDK, you must enable "shared bucket access" and import processing, as discussed here.
Non-mobile documents:
If you have documents that are never going to be synced to CBL clients, then the choice is obvious: use the server SDKs or N1QL.
Mobile documents (docs to sync to CBL clients):
Assuming you are on SGW 2.x syncing with CBL 2.x clients:
If you have documents written on the server end that need to be synced to CBL clients, then consider the following.
Server-side write rate:
If you are looking at server-side writes coming in at sustained rates significantly exceeding 1.5K/sec (let's say 5K/sec), then you should go the SGW API route. While it's easy enough to do a bulk update via a server N1QL query, remember that SGW still needs to keep up and do the import processing (as discussed in the background).
This means that if you are doing high-volume updates through the SDK/N1QL, you will have to rate-limit them so that SGW can keep up (do batched updates via the SDK).
That said, it is important to consider that if SGW can't keep up with the write throughput on the DCP feed, the result is latency, no matter how the writes happen (SGW API or N1QL).
If your sustained write rate on the server isn't expected to be significantly high, then go with N1QL.
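A rough Python sketch of the rate-limited batching suggested above. The names `rate_limited_import` and `write_batch` are made up, and the stand-in writer just appends to a list where a real one would call the Couchbase SDK's bulk API:

```python
import time

def batched(items, batch_size):
    """Yield items in fixed-size batches."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

def rate_limited_import(docs, write_batch, batch_size=500, max_per_sec=1500):
    """Write docs in batches, sleeping so the sustained rate stays under
    max_per_sec and the Sync Gateway import processor can keep up."""
    written = 0
    for batch in batched(docs, batch_size):
        start = time.monotonic()
        write_batch(batch)          # e.g. a bulk SDK upsert or N1QL update
        written += len(batch)
        # Sleep off the remainder of this batch's time budget.
        budget = len(batch) / max_per_sec
        elapsed = time.monotonic() - start
        if elapsed < budget:
            time.sleep(budget - elapsed)
    return written

# Demo with a stand-in writer; a high ceiling keeps the demo instant.
sink = []
count = rate_limited_import(list(range(1200)), sink.extend,
                            batch_size=500, max_per_sec=100000)
print(count)      # 1200
print(len(sink))  # 1200
```

Tuning `max_per_sec` to the ~1.5K/sec figure from the answer (or whatever your own SGW deployment sustains) is the point of the exercise; the batching itself is ordinary.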
Deletes Handling:
It does not matter. Under shared bucket access, deletes coming in via the SDK or the SGW API will result in a tombstone. Read more about it here.
SGW-specific config:
Naturally, if you are dealing with SGW-specific config, such as creating SGW users and roles, you will use the SGW API for that.
Conflict Handling:
In 2.x, it does not matter. Conflicts are handled on the CBL side.
Challenge with the SGW API:
Probably the biggest challenge in a real-world scenario is that using the SGW API path means either storing information about SG revision IDs in the external system, or performing every mutation as a read-then-write (since there is no way to PUT a document without providing a revision ID).
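A Python sketch of that read-then-write cycle against a toy stand-in for Sync Gateway. The in-process `FakeSyncGateway` below is purely illustrative (the real SGW REST API uses `/{db}/{docid}` paths and richer revision handling); it only mimics the "GET the current `_rev`, then PUT carrying it" dance:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

# One document with a revision ID, as a stand-in for SGW's store.
DOCS = {"doc1": {"_rev": "1-abc", "name": "old"}}

class FakeSyncGateway(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(DOCS[self.path.lstrip("/")]).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_PUT(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        docid = self.path.lstrip("/")
        current = DOCS[docid]["_rev"]
        if body.get("_rev") != current:   # stale revision: reject
            self.send_response(409)
            self.end_headers()
            return
        body["_rev"] = "%d-xyz" % (int(current.split("-")[0]) + 1)
        DOCS[docid] = body
        reply = json.dumps({"ok": True, "rev": body["_rev"]}).encode()
        self.send_response(201)
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):
        pass

server = HTTPServer(("127.0.0.1", 0), FakeSyncGateway)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = "http://127.0.0.1:%d" % server.server_port

# Read-then-write: fetch the doc to learn its current _rev, mutate,
# then PUT the mutation back carrying that _rev.
with urlopen(base + "/doc1") as resp:
    doc = json.loads(resp.read())
doc["name"] = "new"
req = Request(base + "/doc1", data=json.dumps(doc).encode(), method="PUT")
with urlopen(req) as resp:
    result = json.loads(resp.read())

print(result["rev"])  # 2-xyz
server.shutdown()
```

The extra GET per mutation is exactly the overhead the answer warns about; caching revision IDs in the external system trades that round trip for bookkeeping.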
The short answer is that for backend operations the Couchbase SDK is your choice, and it will perform much better. Sync Gateway is meant to be used by mobile clients, with few exceptions (*).
Bulk/Batch operations
In my performance tests using the Java Couchbase SDK and bulk operations from AsyncBucket (link), I have updated up to 8 thousand documents per second. In .NET you can do batch operations too (link).
Sync Gateway also supports bulk operations, yet it is much slower because it relies on the REST API and requires you to provide a _rev from the previous version of each document you want to update. This usually results in the backend having to do a GET before doing a PUT. Also keep in mind that Sync Gateway is not a storage unit; it works as a proxy to Couchbase, managing mobile client access to segments of data based on the channels registered for each user, and it writes all of its metadata documents into the Couchbase Server bucket, including channel indexes, user registrations, document revisions, and views.
Querying
Views are indexed, so queries over large data sets can respond very fast. Whenever a document is changed, the map function of every view has the opportunity to map it. But when a view is created through the Sync Gateway REST API, some code is added to your map function to handle user channels/permissions, making it slower than plain code created directly in the Couchbase Admin UI. Querying views with compound keys using the startKey/endKey parameters is very powerful when you have hierarchical data, but this functionality, and the use of a reduce function, are not available to mobile clients.
N1QL can also be very fast when your query takes advantage of Couchbase indexes.
Notes
(*) One exception to the rule is when you want to delete a document and have the deletion reflected on mobile phones. The DELETE operation leaves an empty document with a _deleted: true attribute and can only be done through Sync Gateway. The next time the mobile device synchronizes and finds this hint, it will delete the document from local storage. You can also set this attribute through a PUT operation, optionally adding an _exp: "2019-12-12T00:00:00.000Z" attribute to perform a programmed purge of the document at a future date, so that the server also gets cleaned up. However, simply purging a document through Sync Gateway is equivalent to deleting it through the Couchbase SDK, and this will not be reflected on mobile devices.
NOTE: Prior to Sync Gateway 1.5 and Couchbase 5.0, all backend operations had to be done directly through Sync Gateway so that Sync Gateway and mobile clients could detect those changes. This changed when the shared_bucket_access option was introduced. More info here.
I am new to this field. My setup is a Beckhoff PLC using the TwinCAT 3 software. I am using OPC UA to upload data to an OPC UA server and then send the data to the cloud (an Azure SQL database) through Azure IoT Hub. I want to use pub/sub communication. In the next steps, I will analyze the data with Power BI and display it on several Power BI mobile clients with different types of information. The problem is that I am a bit confused about how pub/sub communication applies in this connection. I have read about MQTT and AMQP, but do I need to write code to be able to apply pub/sub communication? Thanks!
Azure IoT Hub is a Pub/Sub service. You can subscribe multiple stream processors to the data that hits the hub, and each one will see the entire stream. These stream processors can be implemented in custom code, perhaps with an Azure Function, but also with Logic Apps or Azure Stream Analytics.
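The fan-out idea can be sketched in a few lines of Python. The `Hub` class is a toy in-memory stand-in for IoT Hub, just to show that every subscriber sees the full stream:

```python
class Hub:
    """Toy pub/sub hub: every subscriber receives every event, like
    multiple stream processors each reading the full IoT Hub stream."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def publish(self, event):
        for handler in self.subscribers:
            handler(event)

hub = Hub()
sql_sink, analytics_sink = [], []
hub.subscribe(sql_sink.append)        # e.g. the Azure SQL pipeline
hub.subscribe(analytics_sink.append)  # e.g. Stream Analytics / Power BI

for reading in [{"temp": 21.5}, {"temp": 22.0}]:
    hub.publish(reading)

print(sql_sink == analytics_sink)  # True: each consumer saw the whole stream
```

In the real service the subscribers are consumer groups on the hub's endpoints rather than Python callbacks, but the publish/subscribe shape is the same, which is why no custom pub/sub plumbing code is needed on the cloud side.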
You can set up OPC UA servers on both the PLC and the cloud, and each can subscribe to objects on the other for two-way exchange. Otherwise, make the OPC UA objects available on the PLC, then subscribe to them from your cloud service.
Of course you will need to enable all the necessary ports and handle certificate exchange.
If you are using the Beckhoff OPC UA server, you annotate the required variables / structs with attributes. See the documentation.
If you want to use MQTT instead, you will need to write some code using the MQTT library for TwinCAT. You will also need to set up your broker and, again, handle security. There are decent examples for the main providers in the Beckhoff documentation for the MQTT library.
I was intending to release a website to the public that stores sensitive information, such as API keys, on the client side using Local Storage. Variables stored in Local Storage are used in my PHP scripts.
I was thinking that since the site has an SSL certificate, this would suffice for storing sensitive information such as an API key and secret.
My website will not have ads. The website also has a MySQL database.
I am going to configure a general user for reading data, since a user does not need write privileges (it is a read-only site). The problem is that if a user later visits a malicious website, it could extract these Local Storage keys (perhaps with a script) and potentially compromise my consumers.
The names I use on my website when creating and using the keys are very generic, so it would be hard to identify the origin of the keys or their purpose.
Is it wrong to do this to my consumers?
Yes, it is wrong. It creates a huge security hole. Imagine the case where malicious JavaScript is executed in the browser for any reason: it will be able to read the contents of localStorage and send them to the attacker.
This could be caused by a problem on the website, such as an XSS injection vulnerability, but a browser extension with malicious content can achieve the same thing. While XSS injection can be protected against if the developers of the site are careful, which browser extensions users install is beyond your control. Avoid this approach; store sensitive data safely on the server.
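One common alternative, sketched in Python with made-up names (`create_session`, `handle_request`): keep the API key on the server and hand the browser only an opaque session token that maps to it server-side, so there is nothing worth stealing in the client's storage:

```python
import secrets

# The real credential never leaves the server.
API_KEY = "real-secret-api-key"   # hypothetical value for the demo
SESSIONS = {}                     # token -> per-session server-side state

def create_session():
    """Mint an opaque random token; this is all the client ever stores."""
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = {"api_key": API_KEY}
    return token

def handle_request(token):
    """The server looks up the session and uses the key on the
    client's behalf; the client never sees the key itself."""
    session = SESSIONS.get(token)
    if session is None:
        raise PermissionError("unknown or expired session")
    return "called upstream API with " + session["api_key"]

token = create_session()
print(API_KEY in token)   # False: the token reveals nothing about the key
print(handle_request(token))
```

If the token leaks, it can be revoked server-side by deleting the session entry, which is not possible with a raw API key sitting in localStorage.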
I have been doing some reading on Azure Cosmos DB (the new Document DB) and noticed that it allows Azure Functions to be executed when data is written to the database.
Typically I would have written to a Service Bus, processed the message using an Azure Function, and stored the message in Document DB for history.
I wanted some advice on good practice for Cosmos DB.
It depends on your use case: your throughput requirements, what processing you will be doing on the data, how transient your data is, whether it will be globally distributed, and so on.
Yes, Cosmos DB can ingest data at a very high rate, and storage can scale elastically too. Azure Functions are certainly a viable option for processing the Cosmos DB change feed.
Here is more information: https://learn.microsoft.com/en-us/azure/cosmos-db/serverless-computing-database
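As a sketch, an Azure Function subscribes to the change feed through a cosmosDBTrigger binding in its function.json. The names below (CosmosDBConnection, MyDatabase, MyCollection) are placeholders, and the exact property set depends on your Functions runtime and extension version, so check the current binding reference:

```json
{
  "bindings": [
    {
      "type": "cosmosDBTrigger",
      "name": "documents",
      "direction": "in",
      "connectionStringSetting": "CosmosDBConnection",
      "databaseName": "MyDatabase",
      "collectionName": "MyCollection",
      "leaseCollectionName": "leases",
      "createLeaseCollectionIfNotExists": true
    }
  ]
}
```

The lease collection tracks each consumer's position in the change feed, which is what lets the function pick up where it left off after a restart.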