Saving is very slow - mysql

I use TYPO3 7.6.x with the news extension.
I have a category "Events" that contains many news items.
When I add a news item to this "Events" category, it takes a long time.
I did a system environment check with the Install Tool and everything is OK (green).
Also, nothing special shows up in the Chrome network tool.
What could be the cause?

There is likely a significant impact from reference indexing, especially if you use IRRE relations in your records or have heavily populated references.
You can use this extension that I created to defer reference indexing from happening on the fly to being processed as a queue (by a cronjob):
https://github.com/NamelessCoder/asynchronous_reference_indexing
For a TYPO3 7.6 compatible version you have to download/require version 1.2.0, which is the last one to support 7.6:
https://github.com/NamelessCoder/asynchronous_reference_indexing/tree/1.2.0
If you composer require namelesscoder/asynchronous-reference-indexing and use Composer to install TYPO3, you get the right version automatically; for manual installs you have to select it yourself.
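If you want to pin the version explicitly rather than rely on Composer's resolution, something like this should work (standard Composer constraint syntax; version number per the above):

    composer require namelesscoder/asynchronous-reference-indexing:1.2.0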
The "Background" section of the README.md explains a bit more in detail about why reference indexing sometimes becomes a very narrow bottleneck for performance in backend.

It is a bit hard to give you a definite answer, but let me try to explain. The reason behind the slow saving could be:
A server response issue
A TYPO3 installation or extension issue
To narrow it down, you can follow the checklist below:
Check the server configuration (you can check with your server administrator).
Enable PHP error logging (in php.ini) and the Apache server log.
Update TYPO3 to the latest version.
Update the extension to the latest version.
Enable TYPO3 error & deprecation reporting (in typo3conf/LocalConfiguration.php).
Check the system log and the Apache log to see what is going wrong.
Hope this helps!

Related

MySQL Cluster Auto-Installer

I am using the MySQL Cluster Auto-Installer. I click the Next button while keeping the default configuration, but finally, when I click "Deploy and start cluster", it gives me the following error.
I can't find any information regarding this message on the web.
@dennypanther, it's always in the config dir, such as /usr/local/mysql/mysql-cluster/, and it's called "ndb_49_cluster.log".
In addition, you might find it within a folder named with the same node id, for example /usr/local/mysql/mysql-cluster/49/.
It depends on how you configured it during the installation process.
I guess you resolved this problem a long time ago; however, I'll try to help as much as possible.
This looks like a bug fixed in a recent 7.5 release of MySQL Cluster. I did the bug fix myself; there was an issue with not handling the Windows character set properly that made the start of the management server fail. It was fixed by also handling the Windows character set in messages from the OS. I don't recall the exact version it was fixed in, but the fix is definitely in the latest 7.5.

Unwanted code being inserted into pages

Some of our ColdFusion sites are having the word "coupon" inserted into their footer with a link to another site. Is there anything I can do to prevent this? Is there any software I can run to help detect vulnerabilities? It doesn't seem to be SQL injection, as the databases seem fine and nothing unusual is showing up in the logs.
There are several variations of attacks that produce this sort of result (appending a link to some malicious or nefarious site). For example, one of them (script injection) uses the latency between a file upload and its validation to insert executable code on your server.
Other attack vectors include FTP (which is why you should not use it), or other file transfer protocols. In your case the infected machine may not be the server. It could be a client machine with access to the server - a developer who has set up FTP to the server, for example.
Let me know if you need formal help - we have a good track record fixing this sort of thing. If you get more clues, post them and I'll try to help. I will warn you that if this is a server infection, it is at the root level and so pervasive that your only option is to start with a pristine install and reinstall your code. Bad news, I know - sorry :(
We had something similar happen when one of our servers was hit by the hack Charlie Arehart describes here:
http://www.carehart.org/blog/client/index.cfm/2013/1/2/serious_security_threat
Have you applied these patches?
Another option that I would recommend is searching your site(s) for any use of the <cffile> tag that isn't expected. I had a customer that somehow got a single file that was a backdoor to their site. It was particularly dangerous because it could upload files to any location on the server as well as execute any SQL command against any datasource on the server. In other words, this single file opened the door to all of the sites and databases that were running on that server.
This backdoor file (which was named vision.cfm) was often used to update footers with links to coupon and spam sites. vision.cfm was only 210 lines of code.
The entire server had to be sanitized after this was discovered.
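If you want to automate the search for unexpected <cffile> usage suggested above, a quick scan could look like the following (a minimal sketch; the webroot path and extension list are assumptions you would adjust for your site layout):

    import os

    # Walk the webroot and flag any ColdFusion template containing <cffile>.
    # "/var/www/site" is a placeholder; point it at your actual webroot.
    for root, dirs, files in os.walk("/var/www/site"):
        for name in files:
            if name.lower().endswith((".cfm", ".cfc")):
                path = os.path.join(root, name)
                with open(path, "rb") as f:
                    if b"<cffile" in f.read().lower():
                        print(path)

Any hit outside the places where you expect file operations deserves a close look.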

How do you actually use Visual Studio Team System database projects to version SQL Server

How are you supposed to correctly use a Visual Studio Team System database project to implement version control on a SQL Server database?
This might seem overly generic, but nothing I've found online so far has helped me achieve anything useful. I have managed to find functionality that appears similar to features in Redgate's SQL Compare tool, but it definitely didn't seem as intuitive as their product.
My understanding of how these DB projects are supposed to work is that you have a version of the database living either in Team Foundation Server (or in SQL Server itself) that you can check out to your local machine, work on, and then check in with your new changes, allowing simultaneous development to work as it normally does for code. Was I misinformed? Or is it just a complicated process to set up?
A related question: how do you then use it to deploy changes to the staging/production servers?
We don't use that; we simply script everything and put it in source control like any other file, and ALL deployments to prod go only through scripts pulled down from source control. I think the real key is that nothing gets put on prod except through a source-controlled script. Once a developer can't get his change to prod any other way (devs should not have prod rights), there is no incentive not to put the change in source control.
Funny you should ask. I am the one responsible for getting our production databases under version control, and we're using Visual Studio Database Edition to do it. It is a fantastic tool. The very nice thing about this tool is that not only will it keep your schema under version control but it will validate your database schema as well and permit you to run code analysis against it. It also allows refactoring operations, and many other things.
Typically we work against a local development database, sync the changes back to VSDE, build the database to make sure there are no warnings or errors, and then create a deployment script for deployment to our production databases.
This is a simplified explanation of what we do and how we do it, but I think it gives you a general idea of how it can be used. I'd be glad to answer any more specific questions you have.

How to add a version number to an Access file in a .msi

I'm building an install using VS 2003. The install has an Excel workbook and two Access databases. I need to force the Access files to load regardless of the create/mod date of the existing databases on the user's computer. I currently use Orca to force in a Version number on the two files, but would like to find a simpler, more elegant solution (hand-editing a .msi file is not something I see as "best practice").
Is there a way to add a version number to the databases using Access that would then be used in the install?
Is there a better way for me to do this?
@LanceSc
I don't think the MsiFileHash table will help here. See this excellent post by Aaron Stebner. Most likely the last-modified date of the Access database on the client computer will differ from its creation date, so Windows Installer will correctly assume that the file has changed since installation and will not replace it.
The right way to solve this (as the question author pointed out) is to set the Version field in the File table.
Unfortunately, setup projects in Visual Studio are very limited. You can create a simple VBS script that modifies records in the File table (using SQL), but I suggest looking at alternative setup authoring tools instead, such as WiX, InstallShield, or Wise. WiX, in my opinion, is the best.
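For example, the same File-table edit you would do by hand in Orca can be scripted with Python's standard-library msilib module (Windows only; the file key 'MyDatabase.mdb' and the version string are hypothetical - use whatever your File table actually contains):

    import msilib

    # Open the .msi in transacted mode so changes can be committed.
    db = msilib.OpenDatabase("setup.msi", msilib.MSIDBOPEN_TRANSACT)

    # Force a Version onto the unversioned Access file, same as the Orca edit.
    view = db.OpenView(
        "UPDATE `File` SET `Version` = '1.0.0.0' WHERE `File` = 'MyDatabase.mdb'"
    )
    view.Execute(None)
    view.Close()
    db.Commit()

This just automates the hand edit, so the usual caveats about faking versions on unversioned files still apply.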
Since it sounds like you don't have properly versioned resources, have you tried changing the REINSTALLMODE property?
IIRC, in the default value of 'omus', it's the 'o' flag that's only allowing you to install if you have an older version. You may try changing this from 'o' to 'e'. Be warned that this will overwrite missing, older AND equally versioned files.
Manually adding in versions was the wrong way to start, but this should ensure that you don't have to manually bump up the version numbers to get them to install.
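If you want to experiment without rebuilding the package, the property can also be overridden at install time on the msiexec command line (standard Windows Installer syntax; setup.msi is a placeholder):

    msiexec /i setup.msi REINSTALLMODE=emus REINSTALL=ALL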
Look into Build Events for your project. It may be possible to rev the versions of the files during a build event. [Just don't quote me on that]. I am not sure if you can or not, but that would be the place I would start investigating first.
You should populate the MsiFileHash table for these files. Look at WiFilVer.vbs, which is part of the Microsoft Platform SDK, to see how to do this.
My other suggestion would be to look at WiX instead of Visual Studio 2003 for doing installs. Visual Studio 2003 has very limited MSI support, and you can end up spending a lot of time fighting it rather than getting useful work done.

How do you manage databases in development, test, and production?

I've had a hard time trying to find good examples of how to manage database schemas and data between development, test, and production servers.
Here's our setup. Each developer has a virtual machine running our app and the MySQL database. It is their personal sandbox to do whatever they want. Currently, developers will make a change to the SQL schema and do a dump of the database to a text file that they commit into SVN.
We're wanting to deploy a continuous integration development server that will always be running the latest committed code. If we do that now, it will reload the database from SVN for each build.
We have a test (virtual) server that runs "release candidates." Deploying to the test server is currently a very manual process, and usually involves me loading the latest SQL from SVN and tweaking it. Also, the data on the test server is inconsistent. You end up with whatever test data the last developer to commit had on his sandbox server.
Where everything breaks down is the deployment to production. Since we can't overwrite the live data with test data, this involves manually re-creating all the schema changes. If there were a large number of schema changes or conversion scripts to manipulate the data, this can get really hairy.
If the problem were just the schema, it'd be an easier problem, but there is "base" data in the database that is updated during development as well, such as meta-data in security and permissions tables.
This is the biggest barrier I see in moving toward continuous integration and one-step-builds. How do you solve it?
A follow-up question: how do you track database versions so you know which scripts to run to upgrade a given database instance? Is a version table like Lance mentions below the standard procedure?
Thanks for the reference to Tarantino. I'm not in a .NET environment, but I found their DataBaseChangeMangement wiki page to be very helpful. Especially this Powerpoint Presentation (.ppt)
I'm going to write a Python script that checks the names of *.sql scripts in a given directory against a table in the database and runs the ones that aren't there, in order, based on an integer that forms the first part of the filename. If it is a pretty simple solution, as I suspect it will be, I'll post it here.
I've got a working script for this. It handles initializing the DB if it doesn't exist and running upgrade scripts as necessary. There are also switches for wiping an existing database and importing test data from a file. It's about 200 lines, so I won't post it (though I might put it on pastebin if there's interest).
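For anyone curious, a minimal sketch of the idea (not the actual script; it assumes the MySQLdb driver, a "migrations" directory, and a tracking table I've called schema_version here) might look like:

    import os
    import re
    import MySQLdb

    def run_pending(conn, script_dir):
        cur = conn.cursor()
        # Tracking table: one row per script that has already been applied.
        cur.execute("""CREATE TABLE IF NOT EXISTS schema_version (
                           script VARCHAR(255) PRIMARY KEY,
                           applied_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP)""")
        cur.execute("SELECT script FROM schema_version")
        applied = set(row[0] for row in cur.fetchall())

        # Scripts are named like 001_create_users.sql; run in integer order.
        names = [f for f in os.listdir(script_dir) if re.match(r"\d+_.*\.sql$", f)]
        for name in sorted(names, key=lambda f: int(f.split("_")[0])):
            if name in applied:
                continue
            with open(os.path.join(script_dir, name)) as f:
                sql = f.read()
            # Naive statement split; fine for simple scripts without ';' in data.
            for statement in sql.split(";"):
                if statement.strip():
                    cur.execute(statement)
            cur.execute("INSERT INTO schema_version (script) VALUES (%s)", (name,))
            conn.commit()

    conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="app_dev")
    run_pending(conn, "migrations")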
There are a couple of good options. I wouldn't use the "restore a backup" strategy.
Script all your schema changes, and have your CI server run those scripts on the database. Have a version table to keep track of the current database version, and only execute the scripts if they are for a newer version.
Use a migration solution. These solutions vary by language, but for .NET I use Migrator.NET. This allows you to version your database and move up and down between versions. Your schema is specified in C# code.
Your developers need to write change scripts (schema and data changes) for each bug/feature they work on, rather than simply dumping the entire database into source control. These scripts will upgrade the current production database to the new version in development.
Your build process can restore a copy of the production database into an appropriate environment and run all the scripts from source control on it, which will update the database to the current version. We do this on a daily basis to make sure all the scripts run correctly.
Have a look at how Ruby on Rails does this.
First, there are so-called migration files, which transform the database schema and data from version N to version N+1 (or, when downgrading, from version N+1 to N). The database has a table that records the current version.
Test databases are always wiped clean before unit tests and populated with fixed data from files.
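The shape of one such migration file, transcribed as a Python sketch rather than actual Rails/Ruby (the table and column names are made up):

    # Migration for version N -> N+1: each file defines both directions.
    def up(cursor):
        cursor.execute("ALTER TABLE users ADD COLUMN last_login DATETIME")

    def down(cursor):
        cursor.execute("ALTER TABLE users DROP COLUMN last_login")

The runner compares the version recorded in the database with the migration files on disk and applies up() or down() for each step in between.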
The book Refactoring Databases: Evolutionary Database Design might give you some ideas on how to manage the database. A short version is also readable at http://martinfowler.com/articles/evodb.html
In one PHP+MySQL project I've had the database revision number stored in the database, and when the program connects to the database, it will first check the revision. If the program requires a different revision, it will open a page for upgrading the database. Each upgrade is specified in PHP code, which will change the database schema and migrate all existing data.
You could also look at using a tool like SQL Compare to script the difference between various versions of a database, allowing you to quickly migrate between versions.
Name your databases as follows: dev_<<db>>, tst_<<db>>, stg_<<db>>, prd_<<db>> (obviously you should never hardcode DB names).
That way you would be able to deploy even the different types of DBs on the same physical server (I do not recommend that, but you may have to if resources are tight).
Ensure you would be able to move data between those automatically.
Separate the DB creation scripts from the population: it should always be possible to recreate the DB from scratch and populate it (from the old DB version or an external data source).
Do not hardcode connection strings in the code (not even in the config files); use connection string templates in the config files, which you populate dynamically - every reconfiguration of the application layer that needs a recompile is BAD (see the sketch after this list).
Do use database versioning and DB object versioning; if you can afford it, use ready-made products, if not, develop something on your own.
Track each DDL change and save it into some history table (example here).
DAILY backups! Test how fast you would be able to restore something lost from a backup (use automatic restore scripts).
Even if your DEV database and PROD have exactly the same creation script, you will have problems with the data, so allow developers to create an exact copy of prod and play with it (I know I will receive minuses for this one, but a change in the mindset and the business process will cost you much less when shit hits the fan - so force the coders to sign whatever legal paperwork it takes, but ensure this one).
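For the connection-string-template point above, a minimal sketch (the file name, template syntax, and variable names are all assumptions) could be:

    import os
    from string import Template

    # db.conf contains e.g.: mysql://${DB_USER}:${DB_PASS}@${DB_HOST}/${DB_PREFIX}_app
    with open("db.conf") as f:
        template = Template(f.read())

    dsn = template.substitute(
        DB_USER=os.environ["DB_USER"],
        DB_PASS=os.environ["DB_PASS"],
        DB_HOST=os.environ["DB_HOST"],
        DB_PREFIX=os.environ.get("DB_PREFIX", "dev"),  # dev_/tst_/stg_/prd_
    )

Reconfiguring for another environment is then just a matter of changing environment variables, with no recompile or code change.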
This is something that I'm constantly unsatisfied with - our solution to this problem that is. For several years we maintained a separate change script for each release. This script would contain the deltas from the last production release. With each release of the application, the version number would increment, giving something like the following:
dbChanges_1.sql
dbChanges_2.sql
...
dbChanges_n.sql
This worked well enough until we started maintaining two lines of development: Trunk/Mainline for new development, and a maintenance branch for bug fixes, short term enhancements, etc. Inevitably, the need arose to make changes to the schema in the branch. At this point, we already had dbChanges_n+1.sql in the Trunk, so we ended up going with a scheme like the following:
dbChanges_n.1.sql
dbChanges_n.2.sql
...
dbChanges_n.3.sql
Again, this worked well enough, until one day we looked up and saw 42 delta scripts in the mainline and 10 in the branch. ARGH!
These days we simply maintain one delta script and let SVN version it - i.e. we overwrite the script with each release. And we shy away from making schema changes in branches.
So, I'm not satisfied with this either. I really like the concept of migrations from Rails. I've become quite fascinated with LiquiBase. It supports the concept of incremental database refactorings. It's worth a look and I'll be looking at it in detail soon. Anybody have experience with it? I'd be very curious to hear about your results.
We have a very similar setup to the OP.
Developers develop in VM's with private DB's.
[Developers will soon be committing into private branches]
Testing is run on different machines (actually in VMs hosted on a server).
[Will soon be run by Hudson CI server]
Tests load the reference dump into the DB.
Apply the developers' schema patches.
Then apply the developers' data patches.
Then run the unit and system tests.
Production is deployed to customers as installers.
What we do:
We take a schema dump of our sandbox DB.
Then an SQL data dump.
We diff that against the previous baseline.
That pair of deltas is used to upgrade n-1 to n.
We put the dumps and deltas under configuration management.
So to install version N clean, we run the dump into an empty DB.
To patch, we apply the intervening patches.
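A sketch of the dump step above (assumes mysqldump on the PATH and credentials in ~/.my.cnf; the database and file names are placeholders):

    import subprocess

    def dump_baseline(db, prefix):
        # Schema-only dump, for diffing structure between baselines.
        with open(prefix + ".schema.sql", "w") as f:
            subprocess.check_call(["mysqldump", "--no-data", db], stdout=f)
        # Data-only dump, for diffing the reference data.
        with open(prefix + ".data.sql", "w") as f:
            subprocess.check_call(["mysqldump", "--no-create-info", db], stdout=f)

    dump_baseline("sandbox_db", "baseline_n")
    # Then e.g.: diff the n-1 and n schema dumps to produce the delta script.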
(Juha mentioned Rails' idea of having a table recording the current DB version; that's a good one and should make installing updates less fraught.)
Deltas and dumps have to be reviewed before beta test.
I can't see any way around this as I've seen developers insert test accounts into the DB for themselves.
I'm afraid I'm in agreement with other posters. Developers need to script their changes.
In many cases a simple ALTER TABLE won't work - you need to modify existing data too. Developers need to think about what migrations are required and make sure they're scripted correctly (of course, you need to test this carefully at some point in the release cycle).
Moreover, if you have any sense, you'll get your developers to script rollbacks for their changes as well so they can be reverted if need be. This should be tested as well, to ensure that their rollback not only executes without error, but leaves the DB in the same state as it was in previously (this is not always possible or desirable, but is a good rule most of the time).
How you hook that into a CI server, I don't know. Perhaps your CI server needs to keep a known database snapshot that it reverts to each night before applying all the changes made since then. That's probably best; otherwise a broken migration script will break not just that night's build, but all subsequent ones.
Check out dbdeploy; there are Java and .NET tools already available. You could follow their standards for the SQL file layout and schema version table and write your Python version.
We are using the command-line mysql-diff: it outputs the difference between two database schemas (from a live DB or a script) as an ALTER script. mysql-diff is executed at application start, and if the schema has changed, it reports to the developer. So developers do not need to write ALTERs manually; schema updates happen semi-automatically.
If you are in the .NET environment, then the solution is Tarantino (archived). It handles all of this (including which SQL scripts to install) in a NAnt build.
I've written a tool which (by hooking into Open DBDiff) compares database schemas and will suggest migration scripts to you. If you make a change that deletes or modifies data, it will throw an error, but provide a suggestion for the script (e.g. when a column is missing in the new schema, it will check whether the column has been renamed and create xx - generated script.sql.suggestion containing a rename statement).
http://code.google.com/p/migrationscriptgenerator/ SQL Server only, I'm afraid :( It's also pretty alpha, but it is VERY low friction (particularly if you combine it with Tarantino or http://code.google.com/p/simplescriptrunner/)
The way I use it is to have a SQL scripts project in your .sln. You also have a db_next database locally which you make your changes to (using Management Studio, NHibernate Schema Export, LinqToSql CreateDatabase, or whatever). Then you execute migrationscriptgenerator with the _dev and _next DBs, which creates the SQL update scripts for migrating across.
For Oracle databases we use the oracle-ddl2svn tools.
This tool automates the following process:
for every DB schema, get the schema DDLs
put them under version control
changes between instances are resolved manually