Huge data transfer usage on RDS w/ MySQL

We started using RDS last month for our database needs, but we're seeing "data transfer in" usage of about 3-6 GB EACH DAY. Our database is only about 4 GB in total. How is that possible? Is this some misconfiguration on my part?
We're also seeing 8-14 GB of "data transfer out" each day, and I really can't say why.
It's my first time using AWS (we're also using S3, but I've checked the reports and everything is accurate there), so I'm kind of lost.
For context, our application is built in JSF2 and we use Hibernate. We also have a PHP web service for a mobile application. We expect anywhere between 20 and 200 users per day, around the clock.
I've set up the security groups to only allow inbound traffic from our servers (and I removed all outbound rules; is that fine?).
Our instance: Single-AZ class db.t2.micro
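One way to investigate (not from the original question) is to compare the billed transfer against the instance's CloudWatch network metrics, which show when and how much traffic actually flows through the instance. A minimal boto3 sketch follows; the region and DB instance identifier are placeholder assumptions.

```python
# Sketch: sum one day of RDS network throughput from CloudWatch.
# Region and DB instance identifier are placeholders.
import boto3
from datetime import datetime, timedelta

cw = boto3.client("cloudwatch", region_name="us-east-1")

def daily_gb(metric: str, instance_id: str = "mydbinstance") -> float:
    """Sum one day of an RDS network metric (reported in bytes/second)."""
    resp = cw.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName=metric,  # NetworkReceiveThroughput or NetworkTransmitThroughput
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": instance_id}],
        StartTime=datetime.utcnow() - timedelta(days=1),
        EndTime=datetime.utcnow(),
        Period=3600,
        Statistics=["Average"],
    )
    # average bytes/second per hourly datapoint * 3600 seconds = bytes per hour
    return sum(p["Average"] * 3600 for p in resp["Datapoints"]) / 1e9

for m in ("NetworkReceiveThroughput", "NetworkTransmitThroughput"):
    print(m, round(daily_gb(m), 2), "GB/day")
```

If these totals roughly match the bill, the traffic is real application traffic rather than a billing artifact; a chatty ORM (Hibernate can issue far more queries than you expect) is a common culprit.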

Related

Local MS Access Database Sync Operation to a Cloud Backend with Poor Internet

Background
We use MS Access to manage some of our work, but oftentimes our operations are in highly remote locations where a cell or satellite signal is the most reliable form of connectivity. The service isn't fantastic, though: coverage will sometimes drop (more often than not) in the middle of an update.
Setup
Our current setup has a back-end file, stored on a cloud-based server, which holds only the tables and a few simple routines, and a front-end file stored on each user's machine. To make updating and use of the system feasible at all, we had to create copies of the tables in the front-end so that users could run their updates locally, and we provided a sync button that essentially appends the information from the local tables to the cloud tables. Otherwise the user would have to wait for the server to respond to each entry, which was extremely slow.
Problem
However, nearly every time they run this process it stops mid-update and corrupts the file. So we switched to a simple Excel export that they email to the main office, where someone imports the file, updates their local tables, and syncs to the cloud tables in the back-end file.
General Notes
I believe we've narrowed it down to an issue between MS Access and poor internet connectivity, because all systems work when the internet connection is even just reasonably decent. Are there any workarounds available that will resolve this issue?
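Not from the original post, but to make the append-style sync concrete: one common mitigation is to push the rows in small batches inside explicit transactions, so a dropped connection loses at most one uncommitted batch instead of leaving a half-applied update. A minimal pyodbc sketch; the file paths and the LocalOrders/CloudOrders/Synced names are all hypothetical.

```python
# Sketch of a batched, transactional append sync (illustration only).
# Paths, table names, and column names are hypothetical.
import pyodbc

DRIVER = r"Driver={Microsoft Access Driver (*.mdb, *.accdb)};Dbq="
LOCAL_DB = r"C:\app\frontend_data.accdb"          # user's local copy
CLOUD_DB = r"\\cloud-server\share\backend.accdb"  # shared back-end file

def sync_appends(batch_size=25):
    local = pyodbc.connect(DRIVER + LOCAL_DB)
    cloud = pyodbc.connect(DRIVER + CLOUD_DB, autocommit=False)
    try:
        rows = local.cursor().execute(
            "SELECT ID, EntryDate, Payload FROM LocalOrders WHERE Synced = False"
        ).fetchall()
        cur = cloud.cursor()
        for start in range(0, len(rows), batch_size):
            batch = rows[start:start + batch_size]
            cur.executemany(
                "INSERT INTO CloudOrders (ID, EntryDate, Payload) VALUES (?, ?, ?)",
                [tuple(r) for r in batch],
            )
            cloud.commit()  # small commits: a dropped link loses one batch, not the file
            ids = [r.ID for r in batch]
            local.cursor().execute(
                "UPDATE LocalOrders SET Synced = True WHERE ID IN (%s)"
                % ",".join("?" * len(ids)), ids
            )
            local.commit()
    finally:
        local.close()
        cloud.close()
```

Even with batching, a file-based back-end over a flaky link stays fragile; this only limits the blast radius of a drop.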

Alternative access to application files when server is down

I have an application that generates some reports every hour. These reports are critical (and sensitive) for the users, and the only access is through the application (Excel/PDF generated in memory from the database) after user/password/role validation.
Last week the server that hosts the application was down for several hours (hardware failure), and the users could not retrieve those reports (and I couldn't access the database immediately).
My client needs access to at least the last generated reports. For example, if the failure occurs at 5 pm, he needs the 4 pm report.
So I thought about storing the reports somewhere else. Server/network administration is not my responsibility. I don't have another server (and I can't prevent network or hardware failures forever), but I do have a hard drive connected to the same server network (a NAS).
I'm also thinking about storing the reports in Google Drive (the client has G Suite, with some encryption) or some other cloud service. But I'm aware that would require permanent internet access.
What do you recommend?
Have a nice day.
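To illustrate the NAS idea from the question (an illustration, not part of the original post): copy each report to the NAS share the moment it is generated, so the most recent copies survive an application-server outage. The paths below are assumptions.

```python
# Sketch: mirror each hourly report to a NAS share right after generation.
# The NAS path is a placeholder; adjust to the real environment.
import shutil
from datetime import datetime
from pathlib import Path

NAS_ARCHIVE = Path(r"\\nas\reports-archive")  # hypothetical NAS share

def archive_report(report_path: str) -> Path:
    """Copy a freshly generated report to a per-day folder on the NAS."""
    src = Path(report_path)
    day_dir = NAS_ARCHIVE / datetime.now().strftime("%Y-%m-%d")
    day_dir.mkdir(parents=True, exist_ok=True)
    dest = day_dir / src.name
    shutil.copy2(src, dest)  # copy2 preserves timestamps
    return dest

# e.g. call archive_report("reports/sales_1600.pdf") at the end of each hourly run
```

Since the reports are sensitive, the same access control (or at least encryption of the copies) would still be needed on the NAS.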
The best approach is to run multiple instances of the application and put Nginx in front of them: if one instance goes down, Nginx directs requests to another, and the app stays live.

Risk of data corruption over WAN with shared Access database

I am developing an application which uses an MS Access database (.mdb, not my decision) as a back-end. Recently I came across someone suggesting that using the JET engine over a WAN is not really a good idea, with a high risk of data corruption. Since my application should be doing just that (connecting to a database on a NAS (EDIT: not a NAS, a shared network drive)), I got worried. Is it really that risky? If so, is there any workaround, or is an MS Access database just unusable for that kind of application?
EDIT
The front end is a .NET Windows desktop application in C# (WPF). The system does not have many users, ten at most. Most of the time they will access the database from the LAN, and 99% of the writing to the database will be done within the LAN (from the company premises). However, there are some cases where they will connect to the NAS (EDIT: not a NAS, a shared network drive) from outside the company (from home).
If you have a 100 Mb/s fibre, it will be OK, but if your line is, say, an xDSL line, it is generally an absolute no-no.
Convince the powers that be to move the backend to a server engine like SQL Server where the Express version is free.
The scenario you describe is not a good fit for having an Access database as the back-end. The WAN users could very well find the application slow, but the NAS is the real cause for concern regarding corruption, and that would affect both LAN and WAN users.
Many (most?) NAS devices run on Linux and use Samba to provide Windows file-sharing services. The Access database engine apparently relies on some low-level features of "real" Windows file sharing that Samba does not always fully implement.
In fact, the only time I've seen repeated corruption problems with a shared Access back-end (and a properly distributed front-end) was when a client moved their file shares from an older Windows server to a newer NAS device. The Access application continued to work for the most part, but every few months they would find that the primary keys of some tables would disappear after they did a Compact and Repair on the back-end database file. That never happened while their file share was on the Windows server.
Splitting the front-end from the back-end removes the majority of the corruption risk. Of course, with Access there's always some possibility of corruption, and if you're looking to reduce the risk to close to nil you might want to consider SQL Server or MySQL. However, using Access is fine as long as you take proper precautions.
For example, you might want to look into record-locking on tables that will get edited, to prevent multiple simultaneous writes. Backing up your DB on a regular basis is always good, too.
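To make the "move to a server engine" suggestion concrete (an illustration, not from the answers above): with a server back-end, clients talk to a database service over TCP instead of opening a shared file, so a dropped WAN link fails a statement rather than corrupting a file. A pyodbc sketch; the server, database, and table names are placeholders.

```python
# Sketch: same client code, two very different corruption risk profiles.
# Server, database, and table names are hypothetical.
import pyodbc

# File-share back-end: every client opens the .mdb across the network; a drop
# mid-write can corrupt the shared file itself.
access_conn = pyodbc.connect(
    r"Driver={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"Dbq=\\fileserver\share\backend.mdb"
)

# Server back-end (e.g. the free SQL Server Express): writes go through the
# engine's transaction log, so a dropped WAN link fails the statement, not the file.
sql_conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    r"Server=dbserver\SQLEXPRESS;Database=AppDb;Trusted_Connection=yes;"
)

for conn in (access_conn, sql_conn):
    count = conn.cursor().execute("SELECT COUNT(*) FROM Customers").fetchone()[0]
    print(count)
```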

Hosted Database v Cloud Database

I have looked everywhere...
What's the difference between a hosted database and a cloud database? They seem like the same thing.
Thanks
Both "hosted database" and "cloud database" mean that the database lives on the servers of some external provider/hoster.
The hoster might even be the same in both cases.
The main difference is that "cloud" plans are usually meant to scale further (at a higher monthly fee), so you'd use them when you expect your site to grow quickly and need to adjust server capacity on short notice.
On the other hand, "hosted" plans are not that expensive, but have more limitations (server speed, database size...) and are more suited for "small" websites.
Of course this isn't by any means an "official" description of the two terms, but that's the impression that I get every time I see "cloud" or "hosted" webspaces/databases/services/whatever.
It depends on the context in which they're being used, but, yes, they usually mean the same thing. When I see the term cloud database being used they are usually referencing some cloud platform like Amazon EC2 or Microsoft Azure instead of GoDaddy or HostGator or something. Plus, cloud is the new buzz word - I'm sure it sells better. Lol.
As Christian Specht said, cloud servers really do scale further. So why would you need more scaling, and why are there so many feature options when choosing a cloud database service?
Things are not like they used to be. Before smartphones, users got information from the server only when they logged on to a specific web page with their credentials. Now apps like Facebook show notifications, serve ads, and collect or push other data in the background while we are looking at something else entirely.
A hosted database is adequate when users only hit the database after logging on to a web page. But a modern smartphone application needs to access the database constantly, from the moment it is installed on the device. So each installation raises the minimum workload on the server.
That is why more scalability is required here: more simultaneous connections and I/O requests are expected every day. With dedicated servers for that core purpose, and configurable packages based on your expected user count and bandwidth usage, a cloud service is not just another marketing term but a genuinely useful service.

How does a server farm handle a database?

I have been doing some research on servers for a website I want to launch. I'm considering a server with RAID 10, backed up by a NAS that also uses RAID 10. This should keep the data safe in 99.99+% of cases.
My problem appeared when I thought about the need for a second server. If I ever require more processing power (and thus more storage for users), how can I connect a second server to my primary one and make them act as one as far as the database (MySQL) is concerned?
I mean, I don't want to replicate my first DB onto the second server and load-balance the requests - I want to use just one DB (maybe external) and let both servers use it at the same time. Is this possible? And is backing up MySQL data to a NAS a viable option?
The most common configuration, once you scale beyond a single box, is to put the database on its own server. In many web applications the database, rather than the web server, is the bottleneck, so moving the DB to its own machine tends to be the first hardware scale-up step.
This also allows you to put additional security between the database and web server - firewalls are common; different user accounts etc. are pretty much standard.
You can then add web servers to the load balancer, all talking to the same database, as long as your database can keep up.
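For illustration (not from the answer): "all talking to the same database" simply means every web server ships with a connection config pointing at the single database host. A sketch with mysql-connector-python; the host and credentials are placeholders.

```python
# Sketch: every web server in the pool connects to the same dedicated DB host.
# Host name and credentials are placeholders.
import mysql.connector  # pip install mysql-connector-python

def get_connection():
    # Identical config is deployed to each web server behind the load balancer;
    # "db.internal.example.com" stands in for the dedicated database machine.
    return mysql.connector.connect(
        host="db.internal.example.com",
        user="webapp",
        password="change-me",  # from a secrets store in practice
        database="site",
    )
```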
Having more than one web server also helps with resilience - you can have a catastrophic hardware event on one webserver and the load balancer will direct the traffic to the remaining machines.
Scaling the database server performance is a whole different story - though typically you use very beefy machines for the database, and relative lightweights for the web servers.
To add resilience to the database layer, you can introduce clustering - this is a fairly complex thing to keep running, but protects you against catastrophic failure of a single machine.
Yes, you can back up MySQL to a NAS.
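One common way to do that (a sketch, not from the answer above): run mysqldump on a schedule and write the output to the NAS mount. This assumes mysqldump is on the PATH, credentials come from the usual MySQL option file, and the NAS is mounted at /mnt/nas.

```python
# Sketch: nightly logical backup of a MySQL database to a NAS mount.
# The mount point is a placeholder; credentials are read from ~/.my.cnf.
import subprocess
from datetime import datetime
from pathlib import Path

BACKUP_DIR = Path("/mnt/nas/mysql-backups")  # hypothetical NAS mount

def backup_database(db_name: str) -> Path:
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    outfile = BACKUP_DIR / f"{db_name}-{datetime.now():%Y%m%d-%H%M}.sql"
    with open(outfile, "w") as fh:
        subprocess.run(
            ["mysqldump", "--single-transaction", db_name],
            stdout=fh,
            check=True,  # raise if the dump fails rather than keeping a bad file
        )
    return outfile

backup_database("site")
```

The --single-transaction flag gives a consistent snapshot of InnoDB tables without locking writers during the dump.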