I have a Joomla 1.5 based intranet system whose user load varies between 100 and 6,000 users. We have customized this system a lot, using only Joomla core functionality. I know that Joomla 1.5 is an outdated system and that we need to upgrade to a newer version, or at least to 1.5.26. However, we have upgraded PHP and Apache and managed the security vulnerabilities, so upgrading Joomla is not the issue in this question.
We have seen that Joomla uses the jos_session table to manage sessions in the application. An update query is executed against jos_session on every click by every user, so I can see multiple inserts (for new sessions) and update queries (refreshing the timestamp) hitting this one table.
Is there a specific reason why Joomla does this?
Is there a better alternative to avoid these frequent inserts/updates on a single table?
Joomla implements its own custom session management (instead of using the one provided by PHP), and its session management is backed by a database table.
Each time a user loads a page, a session is created, or updated if one already exists. This is why you see all the activity on that table.
By default Joomla uses the MyISAM engine for the jos_session table. This caused many table crashes for me, so I changed the engine to InnoDB and that fixed the crashes. It may also improve performance, but I cannot comment on that.
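If you want to make the same change, it is a single statement (this assumes the default jos_ table prefix; adjust it to match your site, and back up the database first):

```sql
-- Switch the Joomla session table from MyISAM to InnoDB.
-- Assumes the default "jos_" table prefix.
ALTER TABLE jos_session ENGINE = InnoDB;

-- Verify that the engine was actually changed:
SHOW TABLE STATUS LIKE 'jos_session';
```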
In the Joomla back end, under Site -> Global Configuration -> System, you can set the Session Handler to 'None'; this stops using the database and falls back to PHP sessions. However, not everybody recommends this, so run your own tests and see what works better for you.
Of course, make sure you have a backup of your website before you start experimenting with these settings.
Hi, I'm using CiviCRM for membership management. The database was not set up by me and it's old. I wonder where I can find out the database update frequency. I access the database via phpMyAdmin.
I had a look at the information schema but didn't see anything useful.
Thanks!
If you mean code updates, which may impact the DB, see civicrm.org/blog/tags/release
If you mean other minimum requirements (PHP and MySQL), try here: docs.civicrm.org/installation/en/latest/general/requirements
If you want to see the version of CiviCRM installed, you can find it in the version column of the civicrm_domain table:
SELECT version FROM civicrm_domain;
Since CiviCRM 5+, there has been exactly one release every month. As we are at 5.50, it's quite easy to work out how old your CiviCRM is. Also see the list of releases here: https://civicrm.org/blog/tags/release
CiviCRM is not a standard module/plugin; it's more its own software that can be integrated with the CMS (Drupal / WordPress). There is no auto-update, so you need to upgrade manually using the procedure for your CMS:
https://docs.civicrm.org/sysadmin/en/latest/upgrade/drupal7/
https://docs.civicrm.org/sysadmin/en/latest/upgrade/drupal8/
https://docs.civicrm.org/sysadmin/en/latest/upgrade/wordpress/
https://docs.civicrm.org/sysadmin/en/latest/upgrade/joomla/
Lastly, if you have more questions about CiviCRM, there is a dedicated Stack Exchange here: https://civicrm.stackexchange.com/
This is a long story and I am a little bit stuck. I have tried many things and was able to move forward; the question is, what now?
This is the full story:
I started working on a .NET Core 2.1 project. For that I installed Visual Studio 2019 and other tools; the important one is SQL Server 2017 Developer Edition (the free one), installed with the default parameters, which created an instance called MSSQLSERVER.

Unfortunately, the project needed a different instance name, MSSQL2017. I tried to change the instance name but couldn't, because it is a free version; reinstalling did not work either, along with a few other things I tried. In the end, a colleague changed the default connection string to make it compatible with my installation, to see whether the problem was the setup or something else. It worked, and the database and tables were created for the project.

So I managed to create another instance with the proper name, MSSQL2017, created the users, and so on. But when I open SQL Server Management Studio, I notice that the tables are not created, so I run Profiler, run the project again, and this is what I get: 'Cannot insert duplicate key row in object 'sys.syssingleobjrefs' with unique index 'clst'. The duplicate key value is (67439, 76, 101).' That's where I am lost. I can't find what sys.syssingleobjrefs refers to, so I have no idea how to fix this mess. Any help?
Update: sys.syssingleobjrefs is a system base table whose contents I can't see. How do I modify it?
SELECT * FROM sys.syssingleobjrefs does not work.
syssingleobjrefs is a system base table, accessible only through the Dedicated Administrator Connection (DAC).
You have to use sqlcmd -A in order to access this table.
https://learn.microsoft.com/en-us/sql/relational-databases/system-tables/system-base-tables?view=sql-server-ver15
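For illustration, a DAC session might look like the following (the instance name MSSQL2017 is taken from the question; -E uses Windows authentication, and only one DAC connection is allowed at a time, as a sysadmin login):

```
sqlcmd -S .\MSSQL2017 -A -E
1> SELECT TOP 10 * FROM sys.syssingleobjrefs;
2> GO
```

Note that even over the DAC, system base tables are intended to be read-only; modifying them directly is unsupported.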
I'm kind of new to this kind of problem. I'm developing a web app and changing the DB design, trying to improve it and add new tables.
Until a few days ago the app wasn't published, so what I would do was dump all the tables on the server and import my local version. But now we've passed version 1 and users are starting to use it, so I can't overwrite the server database, yet I still need to update the server DB design when I publish a new version. What are the best practices here?
I'd like to know how I can manage the differences between the local and server databases in MySQL.
I need to preserve the data on the server and only change the design; the data in the local DB is only for testing.
Before this, all my other apps were small and I would just change a single table or column, but I can't keep track of all the changes now, since I might revert many of them later, and coordinating all team members on this by hand is impossible.
Assuming you are not using a framework that provides a database migration tool, you need to keep track of the changes manually:
Create a folder sql_upgrades (or whatever name you like) in your code repository.
Whenever a team member updates the SQL schema, they create a file in this folder with the corresponding ALTER statements, and possibly UPDATE, CREATE TABLE, etc. So the file contains all the statements used to update the dev database.
Name the files so that they are easy to manage and statements for the same feature are grouped together. I suggest something like YYYYMMDD-description.sql, e.g. 20150825-queries-for-feature-foobar.sql.
When you push to production, execute the files to upgrade your SQL schema in production. Only execute the files created since your last deployment, and execute them in the order they were created.
Should you need to roll back a file, check the queries it contains and write queries to undo what was done (drop added columns, re-create dropped columns, etc.). Note that this is non-trivial, as many changes cannot be rolled back fully (e.g. you can re-create a dropped column, but the data it held is lost).
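As an illustration, a dated migration file for a made-up feature might look like this (all table and column names here are hypothetical):

```sql
-- 20150825-queries-for-feature-foobar.sql
-- Adds a "foobar" flag to users and a settings table for the feature.

ALTER TABLE users
    ADD COLUMN foobar_enabled TINYINT(1) NOT NULL DEFAULT 0;

CREATE TABLE foobar_settings (
    id      INT UNSIGNED NOT NULL AUTO_INCREMENT,
    user_id INT UNSIGNED NOT NULL,
    value   VARCHAR(255) NOT NULL,
    PRIMARY KEY (id)
);

-- Rollback (keep as a comment or a companion file):
--   DROP TABLE foobar_settings;
--   ALTER TABLE users DROP COLUMN foobar_enabled;
```

Keeping the rollback statements next to the forward ones makes it much easier to undo a deployment later.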
Many web frameworks (such as Ruby on Rails) have tools that do exactly this process for you. They usually work together with the ORM provided by the framework, but keeping track of the changes manually in SQL works just as well.
We added some new functionality and database tables to an existing Drupal site. The site is already live and has a few registered users and their settings. Now we want to add this new functionality without affecting the existing data in the MySQL database.
I am scared to update the database because there is a chance of deleting the existing data. How can we do this?
Modules can not only create tables but also alter tables and insert records into existing tables (variables, menu items, the enabled/disabled flag in the system table, etc.), so while there are 40 new tables, there are likely multiple changes/records somewhere in the other 90 tables as well.
I recommend taking @MikePurcell's advice: make a backup of the existing production database (and of the module files, if you are applying updates to modules that exist on both the production and development sites), install the new modules, and test to make sure everything still works properly. Unfortunately, if you've customized those modules you'll need to re-apply your customizations.
I'm in the process of setting up a new WordPress 3.0 multisite instance and would like to use Sphinx on the database server to power search for the primary website. Ideally, this primary site would offer the ability to search against its content (posts, pages, comments, member profiles, activity updates, etc.) as well as all of the other sites that are a part of the network. Because we'll be adding new sites to the network on a regular basis, I'd like to be able to dynamically add those newly generated tables to the Sphinx .conf file (instead of editing the file and reindexing every time we add a new site).
Unfortunately, MySQL doesn't seem to support wildcards when specifying table names in a query. The best solution I've come across for grabbing a dynamic set of tables is grepping, but I'm pretty certain I don't know how to do that within the .conf file (unless it's possible through magical sorcery).
Is it possible to dynamically specify tables to add to the Sphinx index? Or is this going to cause such performance issues that I'm using the wrong tool?
You could try to dynamically modify the .conf file instead.
You could query a MySQL view that aggregates the many tables. You'd have to re-create the view with each change to the list of blogs, but I believe all the hooks exist to support that, and it should be easy enough to construct the view query.
The bigger problem may be finding a suitable unique record ID for the posts in Sphinx. It has to be a plain INT, but the post IDs from the different blogs will collide with each other.
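A minimal sketch of both ideas, assuming the standard WordPress multisite table naming (wp_posts for the primary blog, wp_2_posts for blog 2, and so on): the view unions the per-blog posts tables, and a synthetic integer ID is derived from the blog ID and post ID so they cannot collide, as long as post IDs stay below the chosen multiplier.

```sql
-- Hypothetical sketch: aggregate two blogs' posts into one view and
-- derive a collision-free integer ID for Sphinx.
-- Assumes post IDs never exceed 10,000,000 per blog.
CREATE OR REPLACE VIEW sphinx_all_posts AS
    SELECT (1 * 10000000 + ID) AS sphinx_id,
           1 AS blog_id, ID, post_title, post_content
    FROM wp_posts            -- blog 1 (the primary site)
    UNION ALL
    SELECT (2 * 10000000 + ID) AS sphinx_id,
           2 AS blog_id, ID, post_title, post_content
    FROM wp_2_posts;         -- blog 2
```

The view would have to be regenerated (one UNION ALL branch per blog) each time a new site is added to the network.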
I think you can create triggers (INSERT/UPDATE/DELETE) in MySQL on the tables of interest (e.g. posts, comments, etc.) and migrate the data to centralized global tables that Sphinx indexes in real time.
The question is how to create those triggers automatically. Either run a cron job to scan for new tables in MySQL, or, I believe, write a simple WordPress plugin that hooks into blog activation.
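One such trigger could look like this (a sketch with hypothetical table names; matching UPDATE and DELETE triggers, and one set per blog table, would also be needed):

```sql
-- Hypothetical: copy each new post from one blog's table into a
-- global table that Sphinx indexes. Assumes a global_posts table
-- with (blog_id, post_id, post_title, post_content) columns exists.
DELIMITER //
CREATE TRIGGER wp_2_posts_after_insert
AFTER INSERT ON wp_2_posts
FOR EACH ROW
BEGIN
    INSERT INTO global_posts (blog_id, post_id, post_title, post_content)
    VALUES (2, NEW.ID, NEW.post_title, NEW.post_content);
END//
DELIMITER ;
```

A plugin hooking blog activation could generate and run this CREATE TRIGGER statement for each new blog's tables.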