I am working on a web application that is based on a MySQL database. I need to collect and analyse usage and performance statistics. The statistics will be aimed at non-technical personnel.
How can I implement this feature? You should treat my question as a programming question but in case you know of a tool or extension that would be suitable please mention it.
The official MySQL client, MySQL Workbench, has included a feature to visualize the Performance Schema since version 6.1. It's in the Performance section of the software.
Read more at: http://mysqlworkbench.org/
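If you prefer to query the underlying data yourself (for example to feed a simple reporting page for non-technical users), the same information is available through plain SQL. A minimal sketch, assuming the Performance Schema is enabled; the sys view shown requires MySQL 5.7+:

    -- Top statements by total execution time, from the Performance Schema digests
    SELECT DIGEST_TEXT,
           COUNT_STAR            AS executions,
           SUM_TIMER_WAIT / 1e12 AS total_seconds,   -- timers are in picoseconds
           SUM_ROWS_EXAMINED     AS rows_examined
    FROM performance_schema.events_statements_summary_by_digest
    ORDER BY SUM_TIMER_WAIT DESC
    LIMIT 10;

    -- The same idea through the friendlier sys schema view (MySQL 5.7+)
    SELECT query, exec_count, total_latency, rows_examined
    FROM sys.statement_analysis
    ORDER BY total_latency DESC
    LIMIT 10;

The Performance dashboard in Workbench is essentially a visualization of these same tables.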
I want to use AWS for hosting a Django application and AWS RDS for the database. The application is a blog-like system.
I am not able to decide which RDS engine I should choose, MySQL or Postgres, both price-wise and performance-wise according to the AWS pricing policy.
This can be very broad and may be opinionated, so I will try to keep it short, based on what I have read:
MySQL would be very good for a CMS site, as it works very well for that use case and MyISAM tables are quite a good fit there.
From what I read, the areas where PostgreSQL does better than MySQL are:
Multi-application databases
Advanced data modelling
What advanced data modelling means is that PostgreSQL is far more mature at complex data modelling than MySQL. It has a very mature extensible type system, a wide range of procedural languages, and a great deal of flexibility in how these languages can be plugged into existing queries.
If that wasn't enough, you can essentially build your data model in PostgreSQL based not only on what information you are storing but also on what information is commonly derived from what you are storing. This makes things like not-first-normal-form designs actually sane to use where they are needed. Add collections and multiple inheritance in table structures and you have a very sophisticated data modelling platform; this blog post explains it in more detail.
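As a rough illustration of the extensible type system and table inheritance mentioned above (the names here are made up for the example, not taken from any of the posts):

    -- A user-defined composite type used directly as a column type
    CREATE TYPE price AS (amount numeric, currency char(3));

    CREATE TABLE product (
        id    serial PRIMARY KEY,
        name  text NOT NULL,
        cost  price                  -- column of the composite type
    );

    -- Table inheritance: the child gets all of the parent's columns
    CREATE TABLE digital_product (
        download_url text NOT NULL
    ) INHERITS (product);

    -- A query on the parent also sees rows from the child table
    SELECT name, (cost).amount FROM product;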
Besides the content management system market, MySQL's other major market is in applications where data is not expected to be exposed to more than one writing application at a time. This leads to a significant difference in handling data validation, etc.
In PostgreSQL validation is always equally strict. If the app expects special error treatment it had better call functions or casts to handle this explicitly.
MySQL, however, places the application in charge of defining the data validation rules. So while PostgreSQL allows the relational and object-relational interface to serve as a public API, in MySQL it is essentially intended to be a private API for the application. This is a huge difference and not readily understood by many people trying to make the choice, and it leads to major differences in application design.
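A small sketch of that difference, assuming a MySQL server running with the legacy non-strict sql_mode (recent versions default to strict mode, which behaves much more like PostgreSQL here):

    -- MySQL, non-strict sql_mode: the out-of-range value is silently clamped
    -- and only a warning is recorded; the application decides whether to care.
    SET SESSION sql_mode = '';
    CREATE TABLE demo (n TINYINT);
    INSERT INTO demo VALUES (999);
    SELECT n FROM demo;                 -- returns 127

    -- PostgreSQL: the equivalent statement is always rejected by the database.
    -- CREATE TABLE demo (n smallint);
    -- INSERT INTO demo VALUES (99999); -- ERROR: smallint out of range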
MySQL is a data storage and reporting solution for your application.
PostgreSQL is a data centralization, modelling, and reporting solution
for your organization. The two are remarkably different.
Now, coming to the second question on pricing: as you can see from the MySQL pricing page and the PostgreSQL pricing page, MySQL is a bit cheaper than PostgreSQL. Reading the answer above, you can make an informed decision about what would be best for you.
Hope this helps!
I'm gonna offer you a 3rd option: Aurora - try it. It's cheaper than those 2 and is MySQL compatible.
This article may be of help to you when deciding.
For a simple blog-like thingie I'd go with MySQL (or the Aurora MySQL-compatible version).
For data-critical and highly relational solutions I might also consider Postgres (Aurora)
Nearly all database implementations offer the possibility of creating indexes, based on various data structures, that greatly speed up searches.
Do any databases - especially the most-used ones, such as MySQL, Postgres, MongoDB, etc. - offer the ability to see how records are being stored? As in, to actually see the B-tree?
In Postgres you can use the pageinspect extension. It provides functions that allow you to inspect the contents of database pages at a low level.
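For example (the index name below is hypothetical), the B-tree functions in pageinspect let you look at the index metapage and at the tuples stored on an individual index page:

    -- pageinspect ships with PostgreSQL; creating it requires superuser rights
    CREATE EXTENSION pageinspect;

    -- B-tree metadata: root block, number of levels, etc.
    SELECT * FROM bt_metap('my_table_pkey');

    -- The index tuples stored on block 1 of that B-tree
    SELECT itemoffset, ctid, itemlen, data
    FROM bt_page_items('my_table_pkey', 1);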
MySQL doesn't have any official tools to view internal index data structures, but developer Jeremy Cole created a set of tools to do that. He wrote about what he discovered in a series of blog posts:
https://blog.jcole.us/innodb/
He demonstrates his InnoDB inspection tools, which he made freely available on Github.
PoWA is a profiling tool for PostgreSQL that uses the pg_stat_statements module and provides a very complete set of data in a web interface.
I'm searching for an equivalent for MySQL, with a web interface (or a separate tool that provides an interface), to profile query performance and general database performance, but I can't find good tools for MySQL.
Would someone know of a good profiling tool for MySQL?
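For context, the per-query statistics PoWA builds on come from pg_stat_statements and can also be queried directly; something along these lines (the column is named total_exec_time from PostgreSQL 13 onward, total_time before that):

    -- Requires shared_preload_libraries = 'pg_stat_statements' and a restart
    CREATE EXTENSION pg_stat_statements;

    -- The raw data PoWA aggregates over time and graphs in its web interface
    SELECT query, calls, total_exec_time, rows
    FROM pg_stat_statements
    ORDER BY total_exec_time DESC
    LIMIT 10;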
We have built a new data fusion C++ algorithm which uses SQLite as an internal database.
However, we would like each of the multiple C++ threads to do a parallel db write and SQLite cannot do that.
So we are now looking at MySQL which allows each of the multiple C++ threads to do a parallel db write.
However, the MySQL non-GPL licence is too costly and we don't want to rely on Oracle for MySQL support since our data fusion C++ algorithm will soon have a US patent.
Are there any alternatives to MySQL which allow each of the multiple C++ threads to do a parallel relational database write and which do not have a costly licensing policy like Oracle's MySQL?
So far, I am starting to look at PostgreSQL's BSD license and Sybase open source relational database.
Could someone tell us if PostgreSQL or Sybase is the right direction to go in?
PostgreSQL is definitely a very good alternative to MySQL.
In my opinion PostgreSQL is actually the better choice anyway, looking at all the things that MySQL doesn't get right and the number of SQL features it still doesn't have.
But again that's my personal opinion.
In terms of licensing the Postgres license is indeed more flexible for commercial usage than the GPL.
The support from the PostgreSQL community on the mailing list is outstanding - I don't know if there is something comparable in the Sybase world (actually I didn't know that Sybase is now OpenSource).
There should be quite a few options. If you're not worried about being cross platform, you could try SQL Server Express. You can use this in production subject to some limitations (I think the limit relates to the type of hardware you can install it on). There is also an express edition of Oracle with similar usage constraints.
In the open source world, there is Firebird which I believe you should be able to use in embedded mode (that is, without having to install a separate network server process). I haven't used this in production but it has been around for many years and looking through SO, it seems to be well regarded. It uses MPL so there should be no licensing risks.
For completeness, you could consider MaxDB from SAP and the Ingres Database System. MaxDB seems to be a very capable DBMS, but when I tried it years ago (version 7.6) it seemed to be extraordinarily difficult to work with. I've never worked with (or heard of anyone working with) Ingres, but apparently it's open source and can be freely used.
Like "a_horse_with_no_name", I'm not aware of there being an open source edition of Sybase although I might have just missed it.
Phil
We currently have approximately 2000 simultaneous connections. We average approximately 425 reads and writes per second, with a read-to-write ratio of 3:1. All of our tables are MyISAM. Can we expect better or worse performance when we go from MySQL 4.1.22 to 5.0?
There's no way for anyone here to tell you without the schema, queries and test data.
Why not set up a dev environment on 5.0 and test it out?
The main concern should be that the 5.0 information_schema is a HUGE vulnerability and can be used to very easily gain access to the SQL server from remote locations: simply dumping the schema via injection lets an unwanted viewer see all of the tables and use that knowledge to pull passwords out of the same schema's columns.
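For what it's worth, the kind of metadata the information_schema exposes (and that the answer above is worried about being dumped through an injection) can be listed with a query like this:

    -- Every table and column visible to the current user in the current database
    SELECT table_name, column_name, data_type
    FROM information_schema.columns
    WHERE table_schema = DATABASE()
    ORDER BY table_name, ordinal_position;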
The MySQL source tree includes a set of benchmark tests written as Perl scripts. See The MySQL Benchmark Suite for some information. You can download the source distribution for MySQL 5.0.91 at the archives.
The source distribution of MySQL 4.1 doesn't seem to be easily available anymore. You might have to check out old sources from Launchpad unless you can find a copy of an old source distribution elsewhere on the internet.
However, the comparison that these benchmarks show is only of general interest. It may be irrelevant to how your application performs. For instance, your usage of the database may not take advantage of some performance improvements in MySQL 5.0, but it may run into regressions that MySQL 5.0 introduced.
The only way to get an answer that is relevant to your application is to try the new software with a test instance of your application, using a sample of data that is a realistic model of the type and volume of data your application typically deals with. As @BenS says, no one on a site like Stack Overflow can give an answer specific to your application.
You say in a comment that you're very concerned about performance, but if you don't have an instance of your application and database that you can run tests on, you aren't doing the work necessary to satisfy this concern.
I would strongly suggest moving straight to 5.1.45 with InnoDB support. Percona provides an excellent version with XtraDB that includes a number of performance-related improvements. Moving off your MyISAM tables and onto InnoDB will provide a huge performance increase in almost all cases. If you are going to burn the QA/testing time to move, do a full move now, not a half-way step.
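If you do go that route, converting the tables is a one-statement-per-table operation (the table name below is just an example); the main caveat is that the rebuild takes time and disk space on large tables:

    -- Convert a MyISAM table to InnoDB (rebuilds the table)
    ALTER TABLE orders ENGINE=InnoDB;

    -- Verify which storage engine each table now uses
    SELECT table_name, engine
    FROM information_schema.tables
    WHERE table_schema = DATABASE();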