Why do I get the error "Script error: No such module "en-headword"" even though this module is already imported into my MediaWiki? - mediawiki

I have downloaded the XML containing the entry tease out, along with all its templates and modules, from the link https://en.wiktionary.org/w/index.php?title=Special:Export&pages=tease_out&templates.
Clearly, this XML contains the module en-headword.
Then I import this XML with:
php C:\Bitnami\mediawiki-1.35.2-0\apps\mediawiki\htdocs\maintenance\importDump.php C:/Users/Akira/Desktop/tease_out.xml
php C:\Bitnami\mediawiki-1.35.2-0\apps\mediawiki\htdocs\maintenance\rebuildrecentchanges.php
php C:\Bitnami\mediawiki-1.35.2-0\apps\mediawiki\htdocs\maintenance\initSiteStats.php
Then my cmd shows:
C:\Users\Akira>php C:\Bitnami\mediawiki-1.35.2-0\apps\mediawiki\htdocs\maintenance\importDump.php C:/Users/Akira/Desktop/tease_out.xml
Done!
You might want to run rebuildrecentchanges.php to regenerate RecentChanges,
and initSiteStats.php to update page and revision counts
C:\Users\Akira>php C:\Bitnami\mediawiki-1.35.2-0\apps\mediawiki\htdocs\maintenance\rebuildrecentchanges.php
Rebuilding $wgRCMaxAge=7776000 seconds (90 days)
Clearing recentchanges table for time range...
Loading from page and revision tables...
Inserting from page and revision tables...
Updating links and size differences...
Loading from user and logging tables...
Flagging bot account edits...
Flagging auto-patrolled edits...
Removing duplicate revision and logging entries...
Deleting feed timestamps.
Done.
C:\Users\Akira>php C:\Bitnami\mediawiki-1.35.2-0\apps\mediawiki\htdocs\maintenance\initSiteStats.php
Refresh Site Statistics
Counting total edits...52
Counting number of articles...1
Counting total pages...45
Counting number of users...2
Counting number of images...0
To update the site statistics table, run the script with the --update option.
Done.
Could you please explain why the content is not rendered properly?
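A quick way to check whether the import actually created the page Module:en-headword in the Module: namespace is to query the standard MediaWiki API. A minimal Python sketch; the api.php URL is an assumption and should point at your own wiki:

import requests

# Assumption: adjust this to your wiki's api.php endpoint.
API = "http://localhost/mediawiki/api.php"

data = requests.get(API, params={
    "action": "query",
    "titles": "Module:en-headword",
    "format": "json",
}).json()

for page_id, page in data["query"]["pages"].items():
    if page_id == "-1" or "missing" in page:
        print("Module:en-headword does not exist on this wiki")
    else:
        print("Module:en-headword exists, page id", page_id)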

Related

Facing trouble with Sphinx Indexer Update

I have Sphinx (version 2.0.5-release) running on my server.
Recently I faced a problem with indexing.
I have a cron job which runs every 2 hours and rotates the indexes.
The problem was that my database fields got the updated data, but their corresponding indexes failed to get the updated data. My Sphinx was running and had not stopped.
Is there any way to check if the indexer is updated? Or the time it was last updated? So that I can notify myself after every indexer rotation that the indexer is updated perfectly?
Is there any way to check if the indexer is updated?
You can capture the output of indexer and pipe it to a log file.
3 */2 * * * indexer | ts >> /var/log/indexer
Or the time it was last updated?
Well, you can check the file dates of the index files to see when the index was regenerated. This is useful to show, for example, that indexer created the indexes (with .new. in the filename) but searchd hasn't loaded them.
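For example, a small Python sketch that prints the age of each index file; the data directory and the index name are assumptions and need to match your sphinx.conf:

import os
import time

# Assumptions: index files live in /var/lib/sphinx/data and the index is called "myindex".
INDEX_DIR = "/var/lib/sphinx/data"

for name in sorted(os.listdir(INDEX_DIR)):
    if not name.startswith("myindex"):
        continue
    age_hours = (time.time() - os.path.getmtime(os.path.join(INDEX_DIR, name))) / 3600.0
    # a lingering ".new." file here suggests indexer rebuilt the index but searchd never rotated it in
    print("%-30s modified %.1f hours ago" % (name, age_hours))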
So that I can notify myself after every indexer rotation that the indexer is updated perfectly?
Generally, using the email feature of cron is pretty good: http://www.cyberciti.biz/faq/linux-unix-crontab-change-mailto-settings/
Use --quiet with indexer, so it only outputs errors.
... but if you want something that can check integrity, you would need something application-specific, something that understands the content of the data. E.g. if there is an index that updates continuously, you could run a Sphinx query and check that there are records from the last few hours.
Or run a database query and a Sphinx query, and compare the output. Run this script regularly via cron too.
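As a rough sketch of that last idea in Python: compare a recent-rows count in MySQL against the matching count from the Sphinx index over SphinxQL. The table name documents, the index name documents_idx, the updated_at column/attribute and all credentials are assumptions to adapt to your schema:

import time
import pymysql  # SphinxQL speaks the MySQL wire protocol, so pymysql can also talk to searchd

# Assumptions: MySQL table "documents" with an "updated_at" DATETIME column, a Sphinx
# index "documents_idx" with an "updated_at" uint attribute, and a SphinxQL listener
# on 127.0.0.1:9306.
cutoff = int(time.time()) - 2 * 3600  # the 2-hour cron window

db = pymysql.connect(host="localhost", user="app", password="secret", db="appdb")
with db.cursor() as cur:
    cur.execute("SELECT COUNT(*) FROM documents WHERE updated_at > FROM_UNIXTIME(%s)", (cutoff,))
    db_count = cur.fetchone()[0]

sphinx = pymysql.connect(host="127.0.0.1", port=9306, user="", password="")
with sphinx.cursor() as cur:
    cur.execute("SELECT id FROM documents_idx WHERE updated_at > %d LIMIT 1" % cutoff)
    cur.fetchall()
    cur.execute("SHOW META")  # total_found here reflects the previous query
    meta = dict(cur.fetchall())
    sphinx_count = int(meta.get("total_found", 0))

if sphinx_count < db_count:
    print("WARNING: index looks stale (db=%d, sphinx=%d)" % (db_count, sphinx_count))
else:
    print("index looks current (db=%d, sphinx=%d)" % (db_count, sphinx_count))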

How to skip updating data of a particular table from django initial_data.json file on each south migration?

I have an initial_data.json file in my project which is loaded into the database initially using syncdb.
Now every time I migrate (South migration) the app, the old initial_data gets re-applied to the tables.
I don't want every table to get updated again, since some of the columns in those tables have been modified by users. I need to skip those tables. Any solutions?
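One common workaround - an assumption about your setup rather than the only option - is to stop the fixture from being auto-loaded at all: rename initial_data.json to something like project_data.json so that neither syncdb nor South's migrate picks it up automatically, and load it explicitly only when you really want to:

from django.core.management import call_command

# Assumption: the fixture was renamed from initial_data.json to project_data.json,
# so it is no longer auto-loaded on syncdb/migrate; load it on demand instead.
call_command("loaddata", "project_data.json")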

How to do updates on all rows at every button click

I have a python app that has an admin dashboard.
There I have a button called "Update DB".
(The app uses MySQL and SQLAlchemy)
Once it's clicked, it makes an API call, gets a list of data, and writes it to the DB; if the API call returns new records it adds them, and it does not duplicate currently existing records.
However, if the API call returns fewer items, it does not delete the missing ones.
Since I don't even have a "starting to google" point, I need some guidance on what type of SQL query my app should be making.
Once the button is clicked, it needs to go through all the rows and:
apply the changes to records that already existed and were updated,
add new ones if there are any returned by the API call,
delete ones that the API call did not return.
What is this operation called, or how can I accomplish this in MySQL?
Once I find out about this, I'll see how I can do that in SQLAlchemy.
You may want to set a timestamp column to the time of the latest action on the table and have a background thread remove old rows as a separate action. I don't know of any single atomic action that performs the desired data reformation. Another option that might be satisfactory is to write the replacement batch to a staging table, rename both versions (swap), and drop the old table. HTH
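For the row-by-row part, the operation is often described as a full synchronization: upsert each record the API returns, then prune rows the API no longer returns. A minimal SQLAlchemy sketch, assuming a simple items table keyed by the API's own id, with api_rows being the list of dicts the API call returns (all names are placeholders):

from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Item(Base):
    __tablename__ = "items"                    # placeholder table name
    id = Column(Integer, primary_key=True)     # the API's identifier
    name = Column(String(255))

engine = create_engine("mysql+pymysql://app:secret@localhost/appdb")  # placeholder URL

def sync_items(api_rows):
    with Session(engine) as session:
        api_ids = {row["id"] for row in api_rows}

        # update records that already exist, add new ones
        for row in api_rows:
            obj = session.get(Item, row["id"])
            if obj is None:
                session.add(Item(**row))
            else:
                obj.name = row["name"]

        # delete rows the API call did not return
        if api_ids:
            session.query(Item).filter(~Item.id.in_(api_ids)).delete(synchronize_session=False)

        session.commit()

Once volume grows, MySQL's INSERT ... ON DUPLICATE KEY UPDATE can replace the per-row loop for the upsert part.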

Logging errors in SSIS

I have an SSIS project with 3 SSIS packages: one is a parent package which calls the other 2 packages based on some condition. In the parent package I have a Foreach Loop container which reads multiple .csv files from a location, and based on the file name one of the two child packages is executed and the data is uploaded into tables in MS SQL Server 2008. Since multiple files are read, if any file generates an error in the child packages, I have to log the details of the error (like the filename, error message, row number, etc.) in a custom database table, delete all the records that got uploaded into the table, and read the next file; the package should not stop for files which are valid and don't generate any error when they are read.
Say a file has 100 rows and there is a problem at row number 50: then we need to log the error details in a table, delete rows 1 to 49 which got uploaded into the database table, and the package should start executing the next file.
How can I achieve this in SSIS?
You will have to set TransactionOption=Required on your Foreach Loop container and TransactionOption=Supported on the control flow items within it. This will allow your transactions to be rolled back if any complications happen in your child packages. More information on the 'TransactionOption' property can be found at http://msdn.microsoft.com/en-us/library/ms137690.aspx
Custom logging can be performed within the child packages by redirecting the error output of your destination to your preferred error destination. However, this redirection logging only occurs on insertion errors. So if you wish to catch errors that occur anywhere in your child package, you will have to set up an 'OnError' event handler or utilize the built-in error logging for SSIS (SSIS -> Logging..)
I suggest you try creating two data flows in your loop container. The main idea here is to have a set of three tables to handle the error situations better and more easily. In the same flow you do the following:
1st dataflow:
It should read the .csv file and load the data into a temp table. If the file is processed with errors, you simply truncate the temp table. In addition, you should also configure the flat file source output to redirect errors to an error log table.
2nd dataflow:
On the other hand, if processing was error-free, you need to transfer the rows from the temp table into the destination table. So here the OLEDB source is the temp table and the OLEDB destination is the final table.
Don't forget to truncate the temp table in both cases, as the next file will need an empty table.
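SSIS implements this with dataflow components rather than code, but as a rough illustration of the same pattern (temp table per file, error log on failure, move to the final table on success, truncate either way), here is a Python/SQLAlchemy sketch in which every table, column and connection name is a placeholder:

import csv
from sqlalchemy import create_engine, text

# Placeholders: tables temp_sales, final_sales and error_log, CSV columns col1/col2,
# and the connection string all stand in for your real objects.
engine = create_engine("mssql+pyodbc://user:pass@MYDSN")

def load_file(path):
    line_no = None
    try:
        with engine.begin() as conn:               # one transaction per file
            with open(path, newline="") as f:
                for line_no, row in enumerate(csv.DictReader(f), start=2):
                    conn.execute(
                        text("INSERT INTO temp_sales (col1, col2) VALUES (:col1, :col2)"),
                        row,
                    )
            # error-free: move the whole batch from the temp table into the final table
            conn.execute(text("INSERT INTO final_sales SELECT * FROM temp_sales"))
    except Exception as exc:
        # the transaction above was rolled back, so nothing reached final_sales;
        # record which file/row failed in the custom error table
        with engine.begin() as conn:
            conn.execute(
                text("INSERT INTO error_log (filename, line_no, message) VALUES (:f, :l, :m)"),
                {"f": path, "l": line_no, "m": str(exc)[:4000]},
            )
    finally:
        # the next file needs an empty temp table either way
        with engine.begin() as conn:
            conn.execute(text("TRUNCATE TABLE temp_sales"))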
Let's break this down a bit.
I assume that you have a data flow that processes an individual file at a time. The data flow would read the input file via a source connection, transform it and then load the data into the destination. You would basically need to implement the Error Handler flow in your transformations by choosing "Redirect Row". Details on the Error Flow are available here: https://learn.microsoft.com/en-us/sql/integration-services/data-flow/error-handling-in-data.
If you need to skip an entire file due to a bad format, you will need to implement a Precedence Constraint for failure on the file system task.
My suggestion would be to get a copy of the exam preparation book for exam 70-463 - it has great practice examples on exactly the kind of scenarios that you have run into.
We do something similar with Excel files.
We have an ErrorsFound variable which is reset each time a new file is read within the for each loop.
A script component validates each row of the data and sets the ErrorsFound variable to true if an error is found, and builds up a string containing any error details.
Then - based on the ErrorsFound variable - either the data is imported or the error is recorded in a log table.
It gets a bit more tricky when the Excel files are filled in badly enough for the process not to be able to read them at all - for example when text is entered in a date, number or currency field. In this case we use the OnError event handler of the Data Flow task to record an error in the log, but we won't know which row(s) caused the problem.

SSIS Incremental Load Error Handling

I have SSIS packages to extract fact tables into staging tables. I have a control table which contains the last extract date for each table, so the package extracts rows where the date > control table date. The problem I have is that I want to redirect rows with errors to an error file in the data flow task of the package. If I do that, the package will not fail (so I can't roll back) and some rows might actually go through, which, if I continue with the process, will ultimately get to my fact table. Now, the next time I run the package, if I had updated the control table I will miss the rows which had errors, and if I had not updated the control table with the date I will re-extract the rows which went through. What is the best practice for this?
How about adding a Row Count transformation onto the error branch? It sounds like you are using the transaction option in SSIS, so put the Data Flow in a Sequence Container and, after the Data Flow, evaluate the value of your row count variable. If it's greater than zero, roll back/abort processing.