Query log file of MySQL database for analysis [closed]

We have log files from a MySQL database and we want to use those log files for analysis (data mining, machine learning, ...). I am very new to this.
Can you give me instructions on how to do that?

There are a number of tools that could be useful to you depending on your requirements.
But why use the MySQL logs and not the DB directly? Or have the details you are searching for go to a new 'information mining' table in the DB as the user interacts with the interface? That could be a lot more powerful.
If you wish to determine the rate of inserts/queries, you can actually set up MySQL to log these types of events to different files (so some of the work is already done, rather than having to mine a complete log file of all events).
Otherwise you are going to want to make use of tools such as grep.
On top of that there is a system called Lucene (from Apache) that will index the data and let you search for keywords. It has various hooks into Java, C, and other languages. It's very similar to how Google trawls web pages.
Otherwise, if you intend to mine the data 'within' the database, then the logs are clearly not your best port of call.
The logs will also contain lots of information about the users. The IP address may be trickier, but you could cross-reference the name of the user running the query with the general server logs to determine the IP of the connection.

Related

Estimate EC2 instance for web application [closed]

I have been building a web application for 50k users. My application will include:
APIs + Socket server: NestJS + SocketIO
Database server: MySQL
Frontend server: ReactJS
I'm going to choose EC2 instances for those. Could you help me choose appropriate instances for each server (e.g. t2.xlarge or ...)? My application will have 3 environments: develop, staging & production.
Thanks!
Nobody can provide the information you seek.
Every application is different. Some apps are compute-intensive (e.g. video transcoding), some are memory-intensive (e.g. data manipulation) and some are network-intensive (e.g. video chat). Also, the way users interact with an app differs from app to app.
The only way you will know the "appropriate instances for each server" is to set up a test platform, select a particular server configuration, then simulate typical usage of your application with the desired number of users (e.g. 50k). Monitor each server (CPU, RAM) and find any bottlenecks. Then adjust instance types and app configuration, and test again.
Yes, it's a lot of work, but that's the only way you'll really know what system sizes and configurations are required. Or, of course, you can simply get real users on your app, monitor it very closely and make changes on the fly.

How to remotely connect to a SQL server via a PHP website [closed]

I have a Laravel website and a MySQL database for it.
My website has a few users, and every user has a SQL database on his own computer, and each of those databases has the same name, same password, and same configuration.
Now I want a user to log in to my website and, via the website, make changes to the SQL database on his computer.
How can I do this?
How can I connect to a SQL database on a user's computer from my website?
The direct answer to your question is that if you know the details of your users' machines, you can create connection strings for them in your config file, and use those connections to open MySQL sessions on the client machines - see https://laravel.com/docs/5.5/database, section "Using Multiple Database Connections".
This assumes all the machines are accessible from your server - presumably because they are on the same, local, non-internet-accessible network.
If your users' machines are accessible from the Internet, please do not do this - they will get hacked. It's a question of "when", not "if".
It's also a pretty horrible solution from an application architecture point of view: presumably your application expects certain things about the data to be true, and it would have to guarantee all those things on every machine. For instance, it might expect that "all orders have a valid customer; all customers have a valid country code". That's hard enough to guarantee on a single database; on a distributed system, it's really hard.
It's much better to use MySQL replication for scenarios like this.

What engine type would be better in this scenario? [closed]

I'm writing an Android app that will sync with a MySQL DB on my web server (there will also be a website reading from/writing to the same DB). The Android app will store a copy of the data locally in a SQLite DB to provide access while offline. If the user creates a row while offline, that record will be uploaded to the server the next time a data connection is available. I'm designing the app and website myself, so I have the ability to set things up as I see fit (meaning it doesn't have to conform to someone else's server).
The SQLite DB will have a column for id (which will represent the id as stored on the server) and a localID column. When the server receives the data, it will acknowledge the new data by returning an array (in JSON format) of the id numbers as stored on the server.
What would be better for this type of scenario: a transaction-safe engine or a non-transaction-safe one (such as MyISAM)? It's my understanding that MyISAM would be faster and take less space, but I can't afford to lose data. I'm thinking that if the Android app doesn't receive the confirmation, it would resubmit the data. It seems like that would prevent data loss, but I need a second (more experienced) opinion. If you would go with a transaction-safe DB, which one would you recommend, as I've never worked with one?
TIA!
A real, transaction-safe engine (in MySQL, that means InnoDB) should be your default choice until you've seen that it's not fast enough.
Consider using UUIDs to generate IDs on the client that are guaranteed to be unique on the server.
Have you thought about how you would handle updates from multiple devices that both had offline changes? You should consider some known patterns for dealing with this kind of synchronization:
Stack Overflow question
Data Replication book

Multiple databases per application. Is this better security? [closed]

I am working on a project which, in my mind, should only involve one database. However, the client insists that separating the data into multiple databases would be more secure in the event of a security breach. It kind of makes sense, but in the end I think that if a breach occurs, you're most likely to get everything stolen no matter how many databases you have. I guess you're protected if and only if there is no connection between your databases.
My project has two types of users, Basic and Paid. My client insists that the basic users should have their own database while the paid users are on a completely separate one. That means I would have to build a login table in each database. The problem is that a free user (or even a guest) is allowed to search for a paid user. Well, guess what: I'll have to connect to the paid users' database in order to retrieve them. Isn't this the same as having them all in one place? And I haven't even mentioned that users need to have addresses, images and other things associated with them (basic or paid). That is where things would get interesting in trying to enforce integrity between the two databases.
Now, back to the question: is it more secure to have multiple databases? If yes, why would it be, and what rules must be followed so as not to break that security?
Yours is one of the most common questions I hear from the business side. You have already stated the real concerns from a developer's point of view, and you are right about them.
Separating the tables of a database onto different servers is NOT going to help security. On the contrary, it is going to cause integrity and synchronization problems whose impact is higher than that of a data breach.
When attackers gain access to your application server, they will also have access to all the databases, and encrypting the source code (e.g. ionCube for PHP) isn't going to help you either. On top of that, a single SQL injection vulnerability can leak your whole data set even if you are using two or three separate DBs.
I believe what the client is actually trying to insist on is separating the database service from the web server, which is worth doing.

Displaying data from MySQL without user input - security concerns [closed]

I am working on a basic MySQL application that will essentially take all of the data from a database and display it, with the only real user interaction being the ability to sort it. Since there is no place where the user can input any data, I feel relatively safe creating this. The main issue I am worried about is the step where the app has to log into the server. I created a separate user name on the server and limited its privileges so that this particular user may only SELECT data. I fear that this still exposes the location of the server. Should I encrypt this step? Are there any other security concerns I should guard against in this scenario?
If you need more information, I will happily add it on here.
UPDATE
It may be useful to know that the main function of this is to catalog merchandise for the purpose of commerce. None of the info in this database will be private. The commerce will occur with the help of a third-party e-commerce site. I want to access the database using PHP's mysqli and display it as HTML. The site is on a shared commercial server.
You should still use prepared queries for making the SQL queries, in case the system later needs to support more kinds of queries (or in case the DB accounts are misconfigured, or something similar).
The location of your server doesn't need to be hidden, as long as it's sufficiently protected. You shouldn't rely on the attacker not knowing where the server is because they can usually figure that out. If your application knows the location of the server, it's safe to assume that the attacker can get it as well.
Instead, focus your security on securing the server. Minimize the accounts on the server, ensure they have non-default (and non-trivial) passwords, make sure the software is up to date, etc.