I am about to:
create a web interface where the user can type in email addresses, which are then sent up to the server in one bulk JSON request and used for messaging those users.
I also have a requirement to be able to upload a CSV file with a long list of email addresses. The problem is that the number of email addresses can be very large: we're talking thousands, or even more.
Theoretically I can either parse the CSV file in the front-end and send the email addresses up in a JSON object (I already have the API for the first use case, where the addresses are typed in and sent as JSON), or I could upload the CSV files to our database and do the parsing on the server side.
Should I consider processing the CSV files in the front-end at all?
What should be a "safe" number of items for processing in the front-end without breaking anything, or ending up with a heavily compromised user experience?
Can anyone comment from experience? Thanks
What should be a "safe" number of items for processing in the front-end without breaking anything, or ending up with a heavily compromised user experience?
This depends on the user's machine.
No one here will be able to give you a definitive answer to that.
Anyway, you can use the Web Workers API.
Web Workers let you run long-running asynchronous work on background threads without heavily affecting or freezing your UI. You can show a spinner indicating that the CSV is being processed; meanwhile, your users can interact with the UI just fine.
That's your best bet where supported.
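For illustration, here's a minimal sketch of that setup in TypeScript. The element ID, file names and message shape are all made up, and it assumes a trivial one-address-per-line CSV:

```typescript
// csv-worker.ts — runs off the main thread, so the UI stays responsive.
// Assumes a trivial CSV: one email address per line, no quoting/escaping.
const ctx = self as unknown as Worker;
ctx.onmessage = (event: MessageEvent<string>) => {
  const emails = event.data
    .split(/\r?\n/)
    .map((line) => line.trim())
    .filter((line) => line.includes("@")); // crude validity check
  ctx.postMessage(emails); // hand the parsed list back to the main thread
};

// main.ts — read the file, delegate parsing, keep the UI free for the spinner.
const input = document.querySelector<HTMLInputElement>("#csv-input")!;
input.addEventListener("change", async () => {
  const file = input.files?.[0];
  if (!file) return;
  const worker = new Worker("csv-worker.js");
  worker.onmessage = (e: MessageEvent<string[]>) => {
    console.log(`Parsed ${e.data.length} addresses`);
    worker.terminate();
    // ...POST e.data to your existing bulk JSON endpoint here.
  };
  worker.postMessage(await file.text());
});
```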
Should I consider processing the CSV files in the front-end at all?
String parsing is generally something modern browsers optimise well. If you move the computation to the server, you need to scale the server to meet demand for those calculations as more and more users use your web app.
You could get playful with it and detect the processing capabilities of the user's machine: if it's capable, use Web Workers; if not, fall back to the server.
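A rough heuristic for that decision might look like the following; the thresholds are arbitrary guesses, not a real benchmark, and the only hard signal is whether Worker exists at all:

```typescript
// Prefer client-side parsing when Web Workers exist, the machine reports a
// few cores, and the file is small-ish; otherwise upload the raw CSV and let
// the server parse it. All cut-offs here are placeholders to be tuned.
function chooseParsingStrategy(file: File): "client" | "server" {
  const workersSupported = typeof Worker !== "undefined";
  const cores = navigator.hardwareConcurrency ?? 1; // not reported everywhere
  const smallEnough = file.size < 5 * 1024 * 1024;  // 5 MB cut-off
  return workersSupported && cores >= 2 && smallEnough ? "client" : "server";
}
```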
The most comprehensive way to do this is by defining a Browser Test Matrix and testing for yourself.
You can even emulate the bandwidths you want to target/test using Network Throttling in Chrome DevTools.
Related
I am creating a mobile application using Swift for my organization. The application reads in data in JSON format to populate the information displayed in the application. I already have a method to generate the JSON files, but I need somewhere to host the actual files. I have an AWS account and an instance running; this is where I was initially hosting my JSON files, but I got an email from AWS saying that having the app constantly grab the JSON files I stored there resembled scanning behaviour, which apparently is not allowed. So I was wondering where I could host JSON files so that my mobile app can read in the information it needs. The biggest thing I need is to host them at a static URL that I can keep calling from my app.
I was thinking of potentially putting the files in an AWS S3 bucket with read permissions and having those get accessed, but since AWS already complained about me doing something like that, I'm iffy. I was also thinking of putting the JSON files on GitHub, but again, I'd hate to get an email from GitHub telling me they don't like an application repeatedly grabbing the data.
For background, the app essentially has a hardcoded URL from which it grabs the JSON data and parses it. I didn't build an API because the information doesn't really change that often; it's much easier to generate the JSON files locally and just post them online somewhere. The information can be read by anyone, too; it's not private or anything.
Message from AWS:
Hello,
We've received a report(s) that your AWS resource(s)
information
has been implicated in activity which resembles scanning remote hosts on the internet for security vulnerabilities. Activity of this nature is forbidden in the AWS Acceptable Use Policy (https://aws.amazon.com/aup/). We've included the original report below for your review.
Please take action to stop the reported activity and reply directly to this email with details of the corrective actions you have taken. If you do not consider the activity described in these reports to be abusive, please reply to this email with details of your use case.
If you're unaware of this activity, it's possible that your environment has been compromised by an external attacker, or a vulnerability is allowing your machine to be used in a way that it was not intended.
We are unable to assist you with troubleshooting or technical inquiries. However, for guidance on securing your instance, we recommend reviewing the following resources:
[I'm new so it won't let me post links, but they attached a couple of help links]
If you require further assistance with this matter, you can take advantage of our developer forums:
[more links I can't include]
Or, if you are subscribed to a Premium Support package, you may reach out for one-on-one assistance here:
[link]
Please remember that you are responsible for ensuring that your instances and all applications are properly secured. If you require any further information to assist you in identifying or rectifying this issue, please let us know in a direct reply to this message.
Regards,
AWS Abuse
Abuse Case Number:
Using an AWS EC2 instance to host static files (which is what it sounds like you were doing?) is pretty standard, and I suspect this is not what Amazon is complaining about. More likely, your instance has been infected by some sort of software that is causing it to request many files from other random servers on the web ("scanning remote hosts... for security vulnerabilities"). You should check that you have not accidentally posted your AWS credentials publicly (in any form), and consider wiping the instance and resetting it. And of course, reply to the email explaining this to AWS.
I'm struggling to find a solution to what I thought would be a common requirement, so I'm hoping someone can help me with some pointers on what to search for and areas to explore.
Background
I'm building an iOS mobile app. I'm storing data locally using realm.io. The app is preinstalled with a snapshot of the content of a WordPress MySQL database (it uses custom types). The content of the WP database is only written via the WordPress install; the mobile app cannot write data.
Objective
So, I want to be able to check for changes since a given date (whenever the local database was last updated) and send the changed records to the mobile app (via the WP JSON API?).
I think I can fetch "posts since a date", but I need a full list of all create, update and delete operations since a given date.
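For concreteness, this is the sort of query I had in mind, sketched in TypeScript for brevity (the same request is trivial from Swift). It runs against the stock WP REST API; modified_after needs WordPress 5.7+, and as far as I can tell it still says nothing about deletes:

```typescript
// Fetch posts changed since the last local sync via the WordPress REST API.
// "modified_after" filters on modification date (WP 5.7+); the older "after"
// parameter filters on publish date only. Deletions are not reported.
async function fetchChangedPosts(since: Date): Promise<unknown[]> {
  const url = new URL("https://example.com/wp-json/wp/v2/posts");
  url.searchParams.set("modified_after", since.toISOString());
  url.searchParams.set("per_page", "100");
  const res = await fetch(url.toString());
  if (!res.ok) throw new Error(`WP API returned ${res.status}`);
  return res.json();
}
```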
Since the app is read-only, I thought this type of one-way sync would be pretty straightforward, but I can't find a common solution.
Any ideas to point me in the right direction would be great. Obviously, if anyone has experience doing this sort of thing with realm.io, that would be amazing :-)
Realm doesn't yet support any sort of synchronization mechanism across different files. We have an issue tracking that, but you're likely looking for a solution in the more immediate future.
Update: Realm has launched the Realm Mobile Platform. This offers synchronization functionality and would greatly simplify the solution for this use case.
You could use, e.g., the server-side Node.js binding to pull new data from your MySQL WordPress installation and push it to a global Realm served by the Realm Object Server. This can be synchronized read-only from the mobile apps, which would automatically receive the deltas and provide updated data to your users.
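A rough sketch of that pull-and-push loop, assuming the realm and mysql2 npm packages; the WordPress table and column names are the stock ones, and the Realm Object Server sync configuration is omitted (in a real deployment it would go into the Realm.open() config):

```typescript
import Realm from "realm";
import mysql from "mysql2/promise";

const PostSchema: Realm.ObjectSchema = {
  name: "Post",
  primaryKey: "id",
  properties: { id: "int", title: "string", modified: "date" },
};

async function syncOnce(since: Date): Promise<void> {
  const db = await mysql.createConnection({
    host: "localhost", user: "wp", database: "wordpress",
  });
  // Pull everything modified since the last run from the WP database.
  const [rows] = await db.execute(
    "SELECT ID, post_title, post_modified FROM wp_posts " +
    "WHERE post_modified > ? AND post_status = 'publish'",
    [since]
  );
  const realm = await Realm.open({ schema: [PostSchema] });
  realm.write(() => {
    for (const row of rows as any[]) {
      // UpdateMode.Modified upserts: new posts are created, changed ones updated.
      realm.create(
        "Post",
        { id: row.ID, title: row.post_title, modified: row.post_modified },
        Realm.UpdateMode.Modified
      );
    }
  });
  realm.close();
  await db.end();
}
```

Deletions would still need extra bookkeeping on the WordPress side (e.g. a tombstone table), since deleted rows simply vanish from wp_posts.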
Whatever mechanism you come up with yourself in the meantime, though, it would require read-write access to the Realm database from your iOS application, so that you can update it with new data.
Pushing changed records as you describe is likely not going to work.
Apple's Push Notification service (APNs), which is the only back channel that works when your app is in the background or suspended, only allows very small payloads. You would use it to signal to your iOS app that something has changed on the server side and there is new data to load. You would then initiate a request to a JSON-based API, wait for the response, map the returned JSON to Realm objects and store them in your database.
You'll probably want to read the "Downloading Content in the Background" section of the background-execution chapter in the official App Programming Guide for iOS.
While pre-seeding the database from the app bundle seems like a nice idea, because the user wouldn't need to wait right after downloading the app, it will enlarge the app itself with data that might become completely irrelevant in the future.
I was wondering whether it is possible to have an xlsm file as the backend while having HTML as the frontend? How can I achieve this, if so?
Thanks in advance.
Since the question reflects a gap in understanding of how applications are structured in the programming realm, I will put this as an answer in the hope of clarifying a few things.
First of all, I don't think you understand what the term "back-end" means.
Please read https://en.wikipedia.org/wiki/Front_and_back_ends AND http://blog.teamtreehouse.com/i-dont-speak-your-language-frontend-vs-backend
Hopefully these will clarify a few things for you.
Just to explain these concepts briefly:
In an application, front end and back end refer to two interfaces that communicate with each other and exchange data in some form. Such a separation is made when the program and the user are separate, as when you have a server and a client (as in distributed programming). This, however, is only one of many programming patterns today: although rare in today's world, there are programs that do not separate functionality this way and instead delegate everything to a core program statically installed on the client's computer. For the other cases, here is what the terms front end and back end mean:
Why such a separation is necessary:
In today's world, many applications (such as web applications and mobile applications) are deployed on shared servers to provide wider and faster access, better support, and a lower cost of access for the client (no disk space required, no download time, etc.). However, in such cases, since the client doesn't have the program locally, they need to access it over internet protocols such as TCP (which today's HTTP uses). The problem is that the front-end files are served every time the application is loaded and cannot keep track of the state of the data; they are stateless (excluding the edge cases of cookies and caches).
Front End:
The sole reason the front end exists is for the user to interact with the application and for the application to collect data from the user, such as login information (the user interface).
Back End:
Now, the back end is a little more complicated. There are two major components to a good back-end design:
Logic
Data
The back end is responsible for processing the data from the user (the front end) in a correct and meaningful way. For example, in a really simple program which adds two numbers, the front end would be responsible for asking the user for the two numbers, and the back end would carry out the actual addition and send the result back to the front end to be displayed.
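To make that concrete, here is a minimal sketch of the same toy example (the endpoint name and port are arbitrary):

```typescript
// Back end (Express): the actual logic lives here.
import express from "express";

const app = express();
app.use(express.json());

app.post("/add", (req, res) => {
  const { a, b } = req.body; // data the front end collected from the user
  res.json({ result: Number(a) + Number(b) });
});

app.listen(3000);

// Front end: collect the numbers, send them over, display the answer.
// const res = await fetch("/add", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify({ a: 2, b: 3 }),
// });
// const { result } = await res.json(); // => 5
```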
If the data has state, the back end also needs to save the last state of the data somewhere on the server. This is where the second component comes in. The most common practice is to have ".db" file(s) representing a database. However, there is no obligation to do so: when necessary, your back end could read data from anywhere, from a plain text file to STDIN.
Why do we use databases? The queries. The query languages that come with databases make it much easier to extract and isolate the relevant data.
After processing and modifying the data, the back end sends it back to the front end to be displayed to the user. The common data-transfer formats are JSON, XML and S-expressions.
So following this short lecture, back to your question:
Can I have an xlsm file in the backend?
Yes. You can persist the data in the back end (on the server) in any way you want. The only thing you need to make sure of is that the endpoint the front end communicates with reads data from this file and writes back onto this file. (Sometimes CSV files are used in a way similar to xlsm files.)
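For illustration, a minimal sketch of such an endpoint using the xlsx (SheetJS) npm package with Express; the file and sheet names are made up, and note that while reading is straightforward, writing an .xlsm back while preserving its macros is considerably fiddlier:

```typescript
import express from "express";
import * as XLSX from "xlsx";

const app = express();

app.get("/data", (_req, res) => {
  const workbook = XLSX.readFile("data.xlsm");           // parse the workbook
  const sheet = workbook.Sheets[workbook.SheetNames[0]]; // first sheet
  res.json(XLSX.utils.sheet_to_json(sheet));             // rows as JSON
});

app.listen(3000);
```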
Is it a good idea?
No. Databases exist for a reason. Use them.
Hope this sheds light on a few things. I highly advise you to understand the application stack before writing any code.
I want to setup a website (intranet in this particular case) that shows realtime updating data. I have the server and the realtime data, it's the software I know less about. I am no stranger to programming, but I am less familiar with web technologies.
Which alternatives do I have? I would prefer open source, and preferably something nimble and transparent as well.
EDIT:
By realtime data I mean data that refreshes faster than my monitor does.
I would prefer the data to update "straight through" rather than at some fixed refresh rate on the browser side. The data is to be shown in a regular tabular format; I don't need any fancy graphics. Please note that at this stage I am not using any particular scripting framework; the purpose of this question is to figure out which one I should use.
I don't know what scripting language and data source you're using, but this should give you a direction.
Display data updates in real-time with AJAX
On the presumption that the data is retrieved using AJAX, you're after a "polling consumer" pattern. In a nutshell, you make a request for your data and the server blocks it until new data is available (or your request times out). When you receive your data, you poll for it again. If you get an error (timeout, server failure, etc.), you might want to implement some back-off policy before trying again.
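A bare-bones version of that loop might look like this; the endpoint name is hypothetical, and the server is assumed to hold the request open until new data arrives or it times out:

```typescript
// Polling consumer with exponential back-off on errors.
async function pollLoop(render: (rows: unknown[]) => void): Promise<void> {
  let backoffMs = 1_000;
  for (;;) {
    try {
      const res = await fetch("/api/updates"); // blocks until data or timeout
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      render(await res.json()); // e.g. rebuild the table rows
      backoffMs = 1_000;        // success: reset the back-off
    } catch {
      await new Promise((r) => setTimeout(r, backoffMs));
      backoffMs = Math.min(backoffMs * 2, 30_000); // capped exponential back-off
    }
  }
}
```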
Hope that helps.
I am trying to design a system that will catch emails submitted to a server and process them. The processing will probably mean parsing metadata such as IP addresses, the email body and subject, etc., and submitting these details to a web service.
I'd like to know what software (SMTP servers or similar) can either be integrated with, to perform arbitrary processing, or which servers will support forwarding to a web service (with queuing and retrying) out of the box.
Exchange is not a preferred option, as I'd like to keep that off the live servers.
It's probably easiest to just use any mail server and process messages by pulling them directly out of the mailbox of the email user(s) on the system via IMAP, POP3 or something similar. Some mail servers are built with third-party access in mind, where you can register for events on new mail arriving, so you don't have to poll for new messages. Different mail servers have different native access protocols and APIs: Exchange has IMAP and Exchange Web Services (EWS), Domino has C++/COM APIs, GroupWise has a web service. And all are going to support some kind of default client access protocol like IMAP or POP3. The native protocols will give you more features (like notifications), but for your purposes IMAP or POP3 may be enough.
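For illustration, here's what the IMAP-pull approach might look like in Node/TypeScript using the imapflow and mailparser npm packages (host, mailbox and credentials are placeholders):

```typescript
import { ImapFlow } from "imapflow";
import { simpleParser } from "mailparser";

async function processMailbox(): Promise<void> {
  const client = new ImapFlow({
    host: "mail.example.com",
    port: 993,
    secure: true,
    auth: { user: "catcher", pass: "secret" },
  });
  await client.connect();
  const lock = await client.getMailboxLock("INBOX");
  try {
    // Fetch the raw source of each message and hand it to the MIME parser.
    for await (const msg of client.fetch("1:*", { source: true })) {
      const mail = await simpleParser(msg.source);
      // mail.subject, mail.text and mail.headers (a Map) are now available
      // for extraction and submission to your web service.
      console.log(mail.subject, mail.headers.get("received"));
    }
  } finally {
    lock.release();
  }
  await client.logout();
}
```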
There are a ton of ways to do this. A lot of this can also depend upon price, client, and the tech environment.
If this is an MS environment and cost is a factor, one way you can do this is to use the built-in IIS SMTP service. The IIS SMTP service is most commonly used for sending emails; however, it can be configured to actually accept email. If you configure the service for a domain, all incoming email for that domain is placed in the mailroot/drop directory as a text file ({guid}.eml format).
You can then use a MIME parser to parse these files and apply whatever business rules you want to them. Since this is done at the filesystem level, you won't have to worry about intercepting any network calls: just grab a file, parse it, and move on to the next one. Then have your app sleep for X seconds and check whether any new emails have come in.
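As a sketch, that drop-directory loop might look like this in Node/TypeScript with the mailparser npm package (the paths follow the IIS defaults but should be treated as placeholders):

```typescript
import { promises as fs } from "fs";
import * as path from "path";
import { simpleParser } from "mailparser";

const DROP_DIR = "C:/inetpub/mailroot/Drop";
const DONE_DIR = "C:/inetpub/mailroot/Processed";

async function drainDropDirectory(): Promise<void> {
  for (const name of await fs.readdir(DROP_DIR)) {
    if (!name.toLowerCase().endsWith(".eml")) continue;
    const file = path.join(DROP_DIR, name);
    const mail = await simpleParser(await fs.readFile(file)); // MIME parse
    // ...apply your business rules to mail.subject / mail.text / mail.headers
    await fs.rename(file, path.join(DONE_DIR, name)); // don't process it twice
  }
}

// Sleep X seconds between sweeps, as described above.
setInterval(() => drainDropDirectory().catch(console.error), 10_000);
```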
PS: shameless plug -- you can use aspNetMime for parsing these files, and extracting the data.
It might be worth your while to implement a simple lexer/parser to scan the headers and look for specific header information. If it's an IP address, you may get away with a regex.
Ideally, you would connect to the POP3 port of the server where the emails are stored and do a quick scan of the information. If the subject line or the message contains a specific string indicating it's the email you want, or the IP address appears in the header, then you can retrieve that email and process it. This is where the lexing/parsing of the email would happen, prior to pulling it down based on those criteria.
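As an illustration of that header scan, a crude sketch that unfolds the raw headers and pulls the originating IPv4 address out of the Received chain; real Received headers vary a lot, so treat this as a starting point rather than a robust parser:

```typescript
function originatingIp(rawHeaders: string): string | null {
  // Unfold: header continuation lines begin with whitespace, so split only
  // at line breaks that start a new header.
  const received = rawHeaders
    .split(/\r?\n(?!\s)/)
    .filter((h) => /^received:/i.test(h));
  // Received headers are prepended as mail travels, so the LAST one in the
  // list is the hop closest to the original sender.
  const last = received[received.length - 1] ?? "";
  const match = last.match(/\[(\d{1,3}(?:\.\d{1,3}){3})\]/); // "from host [203.0.113.7]"
  return match ? match[1] : null;
}
```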
Bear in mind that you do not want to pull down an email that is not relevant; how would you deal with it then?
DotNetOpenMail might help, as it does not necessarily pull down all emails...
Hope this helps,
Best regards,
Tom.