I want to damage one or more tables to test the repair and analysis process. How can I damage the tables of a MySQL database with PHP?
I searched on Google and did not find anything useful.
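For reference, the repair and analysis process I want to exercise is the standard one; a minimal sketch, assuming a MyISAM table named t1 (the table name is just an example):

    -- Sketch of the repair/analysis workflow to be tested
    -- (t1 is an assumed MyISAM table name)
    CHECK TABLE t1;    -- detect corruption
    REPAIR TABLE t1;   -- attempt to fix a corrupted MyISAM table
    ANALYZE TABLE t1;  -- rebuild key distribution statistics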
Until yesterday, my account had only one database, called id1753536_local.
Today when I looked at my databases I found another one named information_schema.
What does this mean? I have performed several SQL injection attacks on websites, and they all contained a database called information_schema. So is my site also vulnerable to SQL injection, or is this usual?
In case you still have questions, here is the takeaway from what @deceze posted to answer your question.
INFORMATION_SCHEMA is a database within each MySQL instance, the place that stores information about all the other databases that the MySQL server maintains. The INFORMATION_SCHEMA database contains several read-only tables. They are actually views, not base tables, so there are no files associated with them, and you cannot set triggers on them. Also, there is no database directory with that name.
And as for whether it is a vulnerability: no, it is not. It does not pose any security risk that you should be concerned about.
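For example, you can query these read-only views like ordinary tables; a quick sketch using the database name from your question:

    -- List the tables in your own database via the INFORMATION_SCHEMA views
    SELECT TABLE_NAME, TABLE_ROWS, ENGINE
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_SCHEMA = 'id1753536_local';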
I am a novice MySQL user, and my company is currently evaluating MySQL Cluster for its shared-nothing architecture and synchronous replication.
However, one of my colleagues informed me that MySQL Cluster has a limitation on the number of table joins you can do in one SQL statement: you can join no more than 5 tables. He attended a MySQL training course 1 or 2 years ago, and I could not find any article or documentation on this matter.
My web application usually queries 5-6 tables in a single SQL statement. A table usually has more than 200 columns and a composite primary key of no more than 3 columns. Tables hold more than 10 million tuples in extreme cases. The MySQL Cluster will span 2 data centers with more than 10 data nodes. The table storage engine will be NDB, for its replication capabilities.
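To illustrate, a typical statement in our application looks something like this (all table and column names here are invented for the example):

    -- Invented example of the kind of 6-table join the application runs
    SELECT o.order_id, c.name, p.sku, s.status, i.invoice_no
    FROM orders o
    JOIN customers c   ON c.customer_id = o.customer_id
    JOIN order_items l ON l.order_id    = o.order_id
    JOIN products p    ON p.product_id  = l.product_id
    JOIN shipments s   ON s.order_id    = o.order_id
    JOIN invoices i    ON i.order_id    = o.order_id;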
Please let me know if more information is needed.
Is there any expert here with MySQL Cluster knowledge who can share insight on this? I would appreciate any links or articles to back up the claims, thank you.
P.S. Please note the version of MySQL Cluster in the title.
EDIT: To clarify, throughout this post, when I say "schema" I am referring to "data model"; they are synonyms in my head. :)
My question is very similar to this question (Rails: Multiple databases, same schema), but mine is related to MySQL.
To reiterate the problem: I am developing a SaaS product. The user will be given an option of which DB to connect to at startup. Most customers will be given two DBs, a production DB and a test DB, which means that every customer of mine will have 1-2 databases. So, if I have 10 clients, I will have about 20 databases to maintain. This is going to be difficult whenever the program (and data model) needs to be updated.
My question is: is there a way to have ONE data model for MULTIPLE databases? The accepted answer to the question I linked above is to combine everything into one database and use a company_id column to separate out the data, but this has several foreseeable problems:
What happens when these transaction-based tables become inundated? My one customer right now has already recorded 16k transactions in the past month.
I'd have to add where company_id = to hundreds of SQL queries/updates/inserts (yes, Jeff Atwood, they're parameterized SQL calls), which I can only assume would have a severe impact on performance.
Some tables store metadata, i.e. drop-down menu items that are company-specific in some cases and application-universal in others. where company_id = would add an unfortunate layer of complexity here, as the sketch after this list shows.
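For instance, a sketch of what even a simple metadata lookup would turn into under the shared-database approach (table and column names are invented):

    -- Every query must be scoped by tenant;
    -- NULL company_id marks application-universal rows
    SELECT item_label
    FROM dropdown_items
    WHERE menu_name = 'payment_terms'
      AND (company_id = ? OR company_id IS NULL);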
It seems logical to me to create (a) new database(s) for each new customer and point their software client to their database(s). But this will be a headache to maintain, so I'm looking for ways to reduce that potential headache.
Create deployment scripts for every change to the DB schema, keep an in-house database of all customers (and keep it updated), and have your scripts pull the connection strings from it.
That is way better than trying to maintain a single database for all customers if your software package takes off.
FYI: I am currently with an organization that has ~4,000 clients, all running separate instances of the same database (very similar, depending on the patch version they are on, etc.) and the same software package. A lot of the customers are running upwards of 20-25k transactions per second.
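A minimal sketch of what such an in-house customer registry could look like (all names are invented):

    -- Hypothetical registry of per-customer databases; deployment scripts
    -- read connection details and the current schema_version from here
    CREATE TABLE customer_databases (
      customer_id    INT AUTO_INCREMENT PRIMARY KEY,
      customer_name  VARCHAR(100) NOT NULL,
      db_host        VARCHAR(255) NOT NULL,
      db_name        VARCHAR(64)  NOT NULL,
      environment    ENUM('production','test') NOT NULL,
      schema_version INT NOT NULL DEFAULT 0
    );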
A "database" in MySQL is called a "schema" by all the other database vendors. There are not separate databases in MySQL, just schemas.
FYI: (real) databases cannot have foreign keys between them, whereas schemas can.
Your test and production databases should most definitely not be on the same machine.
Use tenant-per-schema; that way you don't have company_id columns in every table.
Your database schema should either be generated by your ORM or kept in source control as SQL files, and you should have a script that automatically builds/patches the DB. It is trivial to change this script so that it builds one schema per tenant, as sketched below.
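A rough sketch of per-tenant provisioning under these assumptions (all names invented); note that the tables carry no company_id, and foreign keys can even reach into a shared metadata schema:

    -- One schema per tenant, with the same DDL applied to each
    CREATE DATABASE IF NOT EXISTS tenant_acme;

    CREATE TABLE tenant_acme.transactions (
      transaction_id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
      created_at     DATETIME NOT NULL,
      amount         DECIMAL(12,2) NOT NULL
    );

    -- Foreign keys can cross schemas, e.g. into a shared metadata schema
    -- (shared_metadata.dropdown_items is assumed to exist)
    CREATE TABLE tenant_acme.orders (
      order_id BIGINT PRIMARY KEY,
      item_id  INT,
      FOREIGN KEY (item_id) REFERENCES shared_metadata.dropdown_items (item_id)
    );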
We have a MySQL database based on InnoDB. We are looking to build an analytics system for this data. We are thinking of creating a cloned database that denormalizes the data to avoid joins and uses MyISAM for faster querying. This second database will also help avoid extra load on the main database, to which the data is written.
Apart from this, we are also creating some extra tables that will store aggregated numbers to avoid recalculation.
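For the aggregate tables, a sketch of the kind of nightly refresh we have in mind (daily_sales and orders are assumed names; daily_sales is assumed to have a unique key on sale_date):

    -- Recompute yesterday's row in the aggregate table
    REPLACE INTO daily_sales (sale_date, order_count, total_amount)
    SELECT DATE(created_at), COUNT(*), SUM(amount)
    FROM orders
    WHERE created_at >= CURDATE() - INTERVAL 1 DAY
      AND created_at <  CURDATE()
    GROUP BY DATE(created_at);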
I am wondering how I can sync these tables once a day to keep them updated. This looks similar to MySQL's master-slave configuration, which uses the binary log, but in our case the second database is not an exact slave. Are there any reliable open-source tools, or other ideas, that I could use to write an 'update mechanism'?
Thanks in advance.
I would like to ask for help on how best to replicate 4 tables from our OLTP production database into another database for reporting, keeping the data there forever.
Our OLTP database cleans up data older than 3 months, and we now have a requirement that 4 of the tables in that OLTP database be replicated to another database for reporting, where data should never be removed from those tables.
The structure of the tables is not optimal for reporting, so once we have replicated/copied the tables over to the reporting database, we would select from those tables into new tables with slightly fewer columns and slightly different data types (e.g. they are using the money data type for some date columns).
It is enough if the data is replicated/copied on a nightly basis, but it can be more frequent.
I know this is not detailed information I am providing here, but it is a rough description of what I have at the moment. Hopefully this is enough for someone to offer some suggestions/ideas.
Any suggestions for a good solution that puts the least amount of load on the OLTP database would be highly appreciated.
Thanks in advance!
Have staging tables where you load new data (e.g. each night you can send over the data for the previous day); you can then insert with transformations into the main history table on the reporting server, and truncate the staging table afterwards. To limit the impact on the OLTP server, you can use backup/restore or log shipping and pull the data from a copy of the production database. This will also have the added benefit of thoroughly testing your backup/restore process.
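A sketch of the staging-to-history step under those assumptions (all object names invented):

    -- Transform and append the staged rows, then clear the staging table
    INSERT INTO reporting.order_history (order_id, order_date, amount)
    SELECT order_id, order_date, CAST(amount AS DECIMAL(12,2))
    FROM staging.orders_stage;

    TRUNCATE TABLE staging.orders_stage;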
Some others might suggest SSIS. I think it is overkill for a lot of these scenarios, but YMMV.