Is a Mongo or SQL database always active? - mysql

I'm just starting to learn about backend and database things, and I have some confusion that I hope someone can explain to me.
1. Is the backend server always active (in this case, MongoDB)?
I worked with front-end before, and I could always make a network request to that backend server. For example, if I make a database stored locally on my computer, does my computer have to be powered on all the time to serve those requests?
2. So how do we handle that problem?
We also have serverless databases, like Firebase for example, but for a large user base it will cost a lot, so in my opinion, if we want to build a large project, we will use something else like SQL or Mongo. But I wonder: for an individual like me, is there any way to have that type of server that can always accept requests? As above, if that's true, we would need to run the computer all the time; or is there another solution that keeps the backend up all the time, so that we can always read and update its data even when we turn off our own computer?
Sorry if these questions look hilarious to others, but I don't know much about backend 😅

Related

MySQL update remote server from local server

I am after some advice, please. I am no developer and outsource my work requirements to various freelancers. I have a specific requirement, but due to my lack of skills I'm not quite sure what to ask for, hence my question here.
I have a system where I have several Raspberry Pi "drones" that collect data. These drones are all connected to the web and at present instantly send the data via a live feed directly to a MySQL server hosted at Amazon. This server is accessible via a static IP address.
Each drone is given a unique ID and the data collected is tagged with that ID so we know where it comes from.
The existing MySQL server collects and processes all this data and we have a website that displays the stats. Nothing really complicated and the current system works very well.
The issue I have is that we occasionally have internet connection issues from the drones, so I want to make the whole system more robust. When the drones do have a connection issue we lose data, as the drones do not store anything, which is what I want to resolve.
Just as a heads-up: due to the data structure, the drones will not write to a file; they have to feed directly into a MySQL server.
To resolve this issue, my plan is to have a MySQL server running on each RPi with the same table structure etc. as the main server. Each RPi will write to its own local MySQL server, and I then need that server to "update" the main server at Amazon. Please note the data will only ever be sent in this direction; it will never come from Amazon back to the drones. When a drone can communicate with the main server, I would like the drone-based MySQL server to communicate pretty much instantly (or as close as I can get it), but where there is an internet connection issue I need the drone to store its own data until the connection is restored, at which point it will update the main server.
As I have said, I am no developer, so I wouldn't be undertaking this work myself, but I would like to know what I need to ask for in order to get the right system.
If anyone can help I would appreciate some pointers. In addition, if this is the type of work you could undertake, please feel free to let me know and maybe we could talk further via PM; after all, someone needs to do it.
Many Thanks.
I recommend using a scheduled update to the Amazon database, written in the programming language you are already using or whatever; something that looks like:
while (gathering data) {
    store data into local MySQL
    for (each record in local MySQL) {
        if (there is internet) {
            store record in remote MySQL
            optional: read the remote record back to check the data was stored correctly
            delete record from local MySQL
        } else {
            break;
        }
    }
}
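As a concrete illustration, here is a minimal Python sketch of that loop, meant to run on a schedule (e.g. from cron). It assumes a buffer table named readings (id, payload) on the Pi, the same table on the Amazon server, and the mysql-connector-python driver; the table and column names are placeholders, not something from your existing system:

import mysql.connector

def push_buffered_rows(local_conn, remote_conn):
    # Read everything buffered on the Pi, oldest first.
    local_cur = local_conn.cursor()
    local_cur.execute("SELECT id, payload FROM readings ORDER BY id")
    for row_id, payload in local_cur.fetchall():
        try:
            remote_cur = remote_conn.cursor()
            remote_cur.execute("INSERT INTO readings (payload) VALUES (%s)", (payload,))
            remote_conn.commit()
        except mysql.connector.Error:
            break  # no connection: leave the rest buffered and retry on the next run
        # Delete the local copy only after the remote insert has committed.
        local_conn.cursor().execute("DELETE FROM readings WHERE id = %s", (row_id,))
        local_conn.commit()

Run on a short schedule, this gives near-instant updates while the link is up and automatic catch-up after an outage.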

Storing MySQL credentials in a MySQL database

This is a similar question to "Storing MS SQL Server credentials in a MySQL Database"
So, in theory, imagine I have one MySQL server. I have a "master" database, and then X number of other generic databases. What I'm looking for is a way of using an app (for argument's sake, let's say a web app running on PHP) to first access the master database. This database then needs to tell the app which database to connect to, in the process giving it all the credentials, username, etc.
What is the best way to go about this?
The ideas I have so far:
Store the credentials for all the other databases in the master database. These credentials would of course be encrypted in some way, probably AES. The app would get the encrypted credentials, decrypt them, and connect.
Store the credentials elsewhere, maybe on a completely separate server. When the master database is accessed, it returns some sort of token, which can be used to access the credential storage. Again, encrypted via AES.
Use some sort of system that I am not aware of that does exactly this.
Don't do this at all, and come up with a completely different approach.
To give a little example: "master" would contain a list of clients. Each client would have its own separate database, with its own permissions etc.
I've had no reason to do this kind of thing myself, but your first two ideas sound good to me, and (as long as you include the server address) they are not even necessarily separate ideas (you could have some clients on the same server as master, and some elsewhere); the client logic won't need to care. The only issue I can see is keeping the data in the "master" schema synced with the server's security data. Also, I wouldn't bother keeping database permissions in the master schema, as I would think all clients have the same permissions, just specific to their own schema. If you have "permissions" (settings) that limit what specific clients can do (perhaps limited by contract/features paid for), I would think it would be much easier to keep those in that client's schema, but where their db user cannot change the data.
Edit: It is a decent idea to have separate database users in this kind of situation; it lets you worry less about queries from one user's client inadvertently (or perhaps maliciously) modifying another's (each client account should only have permission to access its own schema). It would probably be a good idea to keep the code for the "master" coordination (and connection) somewhat segregated from the client code base, to prevent accidentally leaking access to that database into the client code; even if the credentials are encrypted, you probably don't want the clients to have any more access than necessary to your connection info.
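To make the first idea concrete, here is a minimal sketch in Python. It assumes a clients table in master with columns (client_id, db_host, db_name, db_user, enc_password), enc_password stored as VARBINARY, the mysql-connector-python driver, and Fernet (AES-based) encryption from the cryptography package; all of these names are illustrative, not from the question:

import mysql.connector
from cryptography.fernet import Fernet

def connect_for_client(master_conn, fernet, client_id):
    # Look up this client's connection details in the master database.
    cur = master_conn.cursor()
    cur.execute(
        "SELECT db_host, db_name, db_user, enc_password"
        " FROM clients WHERE client_id = %s",
        (client_id,),
    )
    host, name, user, enc_pw = cur.fetchone()
    # Decrypt only in memory; the key lives with the app, never in master.
    password = fernet.decrypt(bytes(enc_pw)).decode()
    return mysql.connector.connect(host=host, database=name, user=user, password=password)

Here fernet = Fernet(key), where key was generated once with Fernet.generate_key() and is stored outside the database (a config file, environment variable, or secret store), since whoever holds the key can decrypt every client's credentials.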
I did something like this not long ago. It sounds like you're trying to build some kind of one-database-per-tenant multi-tenant system.
Storing encrypted credentials in a directory database is fine, since there's really no fundamentally different way to do it. At some point, you need to worry about storing some secret (your encryption key) no matter what you do.
In my use case, I was able to get away with a setup where the directory just mapped tenants to db hosts. The database name and credentials for each tenant were derived from the tenant's identifier (a string). So, something like the following, given a tenant ID T:
host = whatever the directory says.
dbname = "db_" + T
dbuser = T
dbpass = sha1("some secret string" + T)
From a security standpoint, this is no better (actually a bit worse) than storing AES encrypted credentials in the directory database, since if someone owns your app server, they can learn everything either way. But it's pretty good, and easy to implement.
This is also nice because you can extend the idea a bit, get rid of the directory server entirely, and write some function that maps your tenant ID to one of N database hosts. That works great until you add or remove db servers, and then you need to handle shuffling things around. See how memcache works, for example.
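For illustration, that derivation might look like the following in Python (the secret string and the directory lookup are placeholders):

import hashlib

SECRET = "some secret string"  # deployment-wide secret, kept out of the database

def tenant_db_settings(tenant_id, directory):
    return {
        "host": directory[tenant_id],  # whatever the directory says
        "dbname": "db_" + tenant_id,
        "dbuser": tenant_id,
        "dbpass": hashlib.sha1((SECRET + tenant_id).encode()).hexdigest(),
    }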
You can use Vault to do this in a much more systematic way; in fact, this is a strong use case for it.
Percona has already written a great blog post on it.
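For a rough idea of the shape this takes, reading per-tenant credentials out of Vault's KV store with the hvac client might look like this (the Vault address, token handling, and secret path are assumptions for illustration):

import hvac

def tenant_creds(tenant_id, vault_addr="http://127.0.0.1:8200", token="dev-token"):
    client = hvac.Client(url=vault_addr, token=token)
    # Read this tenant's credentials from the KV v2 secrets engine.
    secret = client.secrets.kv.v2.read_secret_version(path="tenants/" + tenant_id)
    return secret["data"]["data"]  # e.g. {"host": ..., "user": ..., "password": ...}

Vault also has a database secrets engine that can issue short-lived MySQL credentials on demand, which removes the need to store long-lived passwords anywhere at all.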

Reliability in Android when the connection is off

I'm developing an app where I store my data in an online DB using HTTP POST and GET.
I need to add some reliability to my software, so if the user presses the button and there is no connection, the data should be stored in something (a file? SQLite?) and then, when the connection is back on, the HTTP request to send the data should be made.
Any advice or pieces of code to show me how to do this?
Thanks.
Sounds good and pretty straightforward to me. Just go for it.
You use a local SQLite db as a "cache". To keep it simple, do not implement any logic about that in your app's normal code; just use the local db. Then, separately, you code a synchronizer. That one checks for an online connection and synchronizes the local SQLite database with a remote database, maybe MySQL.
This should be perfectly fine for all applications that do not require immediate exchange of the data with other processes all the time.
There is one catch, though: the low performance of SQLite on bigger data sets. That is an issue with all single-file database solutions. So this approach is probably only valid for small data sets in total, or if you can reduce the usage of the local database to only a part of the total data, maybe only the time-critical stuff.
Another workaround might be to use joins across the two separate databases, the local and the remote one. But such things really boost the complexity of the code, so think thrice about whether that is really required.
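A minimal sketch of that cache-plus-synchronizer pattern, written in Python for brevity (the events table, endpoint URL, and payload shape are all assumptions, not from the question):

import sqlite3
import requests

db = sqlite3.connect("cache.db")
db.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, payload TEXT)")

def queue_event(payload):
    # The button handler always writes locally; the UI never waits on the network.
    db.execute("INSERT INTO events (payload) VALUES (?)", (payload,))
    db.commit()

def synchronize(endpoint="https://example.com/api/events"):
    # The separate synchronizer drains the queue whenever the connection is up.
    for row_id, payload in db.execute("SELECT id, payload FROM events ORDER BY id").fetchall():
        try:
            requests.post(endpoint, data={"payload": payload}, timeout=10).raise_for_status()
        except requests.RequestException:
            return  # offline or server error: keep the row and retry later
        db.execute("DELETE FROM events WHERE id = ?", (row_id,))
        db.commit()

On Android itself the same pattern maps onto the built-in SQLite database and a background job that retries the HTTP POST when connectivity returns.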

Best way to report events / read events (also MySQL)

So I'm going to attempt to create a basic monitoring tool in VB.NET. I'd like some advice on how to tackle the logging and reporting side of things, so I'd appreciate some responses from users who I'm sure have a better idea than me and can tell me far more efficient ways of doing things.
My plan is to have a client tool which reads values from a MySQL database and basically refreshes every x interval; I'm thinking 10-15 minutes at the moment. This side of the application is quite easy: I can get something to read a database every x amount of time and then change labels and display alerts based on the values. This is all well documented, and I am probably okay with that.
The second part is to have a client that sits in the system tray of the server gathering the required information. The system tray part will probably be the trickiest bit of this, but that's not really part of my question.
So I assume I can use the normal information-gathering commands, store the results perhaps as strings, and then connect to the same database and add them to the relevant fields. For example, if I had a MySQL table called "server" with a column titled "Connection", I could check whether the server has an internet connection, store the result as the value 1 for yes or 0 for no, and then send a MySQL command to update the "Connection" value to 0/1.
Then I assume the monitoring tool can run a MySQL query to check the "Connection" column and, if the value is 0, change a label or flag an error, and if it is 1, report that connectivity is okay?
My main questions about the above are listed below.
Is using a MySQL database the most efficient way of doing something like this?
Obviously if my database goes down there's no more reporting; I still think that's a con I'll have to live with, though.
Is storing everything as values within the code the best way to store my data?
Is there any particular format I should use for the MySQL column? I was thinking maybe tinyint(9).
Is the above method redundant and pointless?
I assume all these database connections could cause some unwanted server load; however, the 15-minute refresh time should combat that.
Is there a way to properly combat delays, where the client doesn't update in time for the reporter and it picks up stale data? Perhaps a fail-safe such as a column containing the last-updated time?
You probably don't need the tool that gathers information per se. The web app (real-time monitor) can do that, since the clients are storing their information in the same database. The web app can read the database every 15 minutes and display the data, without the intermediate step of saving it again. This will provide the web app with the latest information instead of a potential 29-minute delay.
In other words, the clients are saving the connection information once. Don't duplicate it in the database.
MySQL should work just about as well as anything.
It's a bad idea to hard-code "everything". You can use application settings or a MySQL table if you need to store IPs, etc.
In an application like this, the conversion will more than offset the data savings of a tinyint. I would use the most convenient data type.
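On the last question, one common approach, sketched here in Python with assumed table and column names, is to write a timestamp alongside the status and have the reader treat stale rows as unknown rather than as up or down:

import mysql.connector
from datetime import datetime, timedelta

def report_status(conn, server_id, is_connected):
    # The gatherer records the status together with when it was written.
    cur = conn.cursor()
    cur.execute(
        "UPDATE server SET `Connection` = %s, updated_at = NOW() WHERE id = %s",
        (1 if is_connected else 0, server_id),
    )
    conn.commit()

def read_status(conn, server_id, max_age_minutes=20):
    # The monitor distrusts any row older than the expected refresh window.
    cur = conn.cursor()
    cur.execute("SELECT `Connection`, updated_at FROM server WHERE id = %s", (server_id,))
    status, updated_at = cur.fetchone()
    if datetime.now() - updated_at > timedelta(minutes=max_age_minutes):
        return None  # stale: the reporter missed its window, so treat as unknown
    return status

This assumes the application and database clocks roughly agree; comparing against the database's own NOW() in the query instead avoids even that dependency.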

Query CSV File/general database questions

OK so I'm kinda new to databases in general. I understand the basic theory behind them and have knocked up the odd Access DB here and there.
One thing I'm struggling to learn about is the specifics of how e.g. an SQL query accesses a database.
So say you have a scenario where there's a database on a LAN server (let's say it's MS Access for argument's sake). You run some SQL query or other on it from a client machine. Does the client machine have to download the entire database to run said query (even if the result of the query is just one line)? Or does it somehow manage to get just the data it wants to come down the ol' CAT5? Does the server have to be running anything to do that? I can't quite understand how the client could get JUST the query results without the server having to do some of the work...
I'm seeing two conflicting stories on this matter when googling stuff.
And so this leads to the next question (which may already be answered): if you CAN query a DB without having to get the whole damn thing, and without the server running any other software, can the same be done with a CSV? If not, why not?
The reason I ask is that I'm developing an app for a mobile device that needs to talk to a db or CSV file of some kind, and it'll be updating records at a pretty high rate (barcode scanning), so I don't want the network to grind to a halt (it's a slow bag of [insert relevant insult] as it is). The less data travelling from device to server, the better.
Thanks in advance
The various SQL servers are just that: servers. Each is a program that listens for client queries and sends back a response. It is more than just its data.
A CSV file, or "flat file" is just data. There is no way for it to respond to a query by itself.
So, when you are on a network, your query is sent to the server, which does the work of finding the appropriate results. When you open a flat file, you're using the network and/or file system to read/write the entire file.
Edit to add a note about your specific usage: you'll probably want to use a database engine, as the queries will generate the least amount of network traffic. For example, when you scan a barcode, your query may be as simple as the following text:
INSERT INTO barcode_table (code, scan_date, user) VALUES ('1234567890', '2011-01-24 12:00:00', '1');
The above string is handled by the database engine and the code (along with whatever relevant support data) is stored. No need for your application to open a file, append data to it, and close it. The latter becomes very slow once files get to a large size, and concurrency can become a problem with many users accessing it.
If your application needs to display some data to your user, it would request specific information the same way, and the server would generate the relevant results. So, imagine a scenario in which the user wants a list of products that match some filter. If your products were books, suppose the user requested a list by a specific author:
SELECT products.title, barcode_table.code
FROM products
JOIN barcode_table ON barcode_table.product_id = products.id -- assuming product_id links the two tables
WHERE products.author = 'Anders Hejlsberg'
ORDER BY products.title ASC;
In this example, only those product titles and their barcodes are sent from the server to the mobile application.
Hopefully these examples help make the case for using a structured database engine of some kind, rather than a flat file. The specific flavor and implementation of database, however, is another question unto itself.
Generally speaking, relational databases are stored on a remote server, and you access them via a client interface. Each database vendor has client software that you install on your own computer to access the database on a server. The entire DB is not sent back to the client when a query is executed, although the server can send very large result sets if you are not careful about how you structure your query. Generally speaking, the flow is like this:
A database server listens for clients to connect.
A client connects and issues a SQL command to the database.
The database builds a query plan to figure out how to get the result.
The plan is executed and the results are sent back to the client.
CSV is simply a file format, not a fully functional platform like a relational database.
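To make that flow concrete, here is a small client-side sketch in Python using mysql-connector-python (the host, credentials, and tables are placeholders, reusing the assumed product_id key from the earlier query). Only the short SQL string travels to the server, and only the matching rows come back:

import mysql.connector

# Connect to the database server; the data itself never leaves the server.
conn = mysql.connector.connect(
    host="db.example.com", user="scanner", password="secret", database="inventory"
)
cur = conn.cursor()

# The server parses this, builds a plan, and executes it where the data lives.
cur.execute(
    "SELECT products.title, barcode_table.code"
    " FROM products JOIN barcode_table ON barcode_table.product_id = products.id"
    " WHERE products.author = %s ORDER BY products.title",
    ("Anders Hejlsberg",),
)
for title, code in cur.fetchall():
    print(title, code)  # only these result rows crossed the network
conn.close()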