I'd like to see more than 500 recent changes on a MediaWiki installation (through Special:RecentChanges). According to this thread, this limit is hardcoded in includes/specials/SpecialRecentchanges.php.
I don't want to meddle with MediaWiki core; is there a way to get more than 500 changes without changing SpecialRecentChanges.php?
If the only way to achieve this is by changing SpecialRecentChanges.php, does it suffice to increase this number, and what are the possible side effects?
You can change the proposed values with the $wgRCLinkLimits parameter, e.g. by adding the following line to your LocalSettings.php:
$wgRCLinkLimits = array( 50, 100, 250, 500, 1000, 2000 );
As you saw, there is a hardcoded limit in includes/specials/SpecialRecentchanges.php: it was 500 before MediaWiki 1.16 and has been 5000 since (MediaWiki 1.16 was released in 2010 and is no longer supported). It is not possible to display more entries than this number without patching MediaWiki.
I am trying to reproduce locally on my computer what I get when running mirbase's BLAST on their website. The 'search sequences' option there is 'mature miRNAs', which I downloaded to my computer and turned into a BLAST database with this command:
./makeblastdb -in /home/marianoavino/Downloads/mature.fa -dbtype 'nucl' -out /home/marianoavino/Downloads/mature
Then I see that mirbase uses an e-value of 10, which I keep locally as well.
At the end of the analysis, mirbase reports these parameter settings:
Search parameters
Search algorithm: BLASTN
Sequence database: mature
Evalue cutoff: 10
Max alignments: 100
Word size: 4
Match score: +5
Mismatch penalty: -4
and this is the command line I use for BLAST on my computer:
./blastn -db /home/marianoavino/Downloads/mature -evalue 10 -word_size 4 -query /home/marianoavino/Downloads/testinputblast.fasta -task "blastn" -out /home/marianoavino/Downloads/testBLast.out
The results of the two analyses are different, with mirbase finding many more hits than local BLAST.
Do you have any idea which parameters I should use on the local BLAST command line to match the mirbase parameters listed above and get the same answer?
There can be many reasons for different results, including the BLAST versions you and they used, the parameters (as you said), and differences in the databases (remember, database size is used to calculate statistics such as the e-value, so you may end up with different results).
Exact replication of results may be difficult, but the question is whether the differences are meaningful. Just because an alignment has some e-value (and 10 is an unusually high cutoff) does not mean it is meaningful. For a given sequence, if the searches yield different numbers of alignments but the same number of high-quality alignments (high bitscore, low e-value, full alignment between query and subject sequences), does it matter?
I would compare the results to see where the differences are, then move forward from there.
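That said, your local command sets neither the match/mismatch scores nor the maximum number of alignments from the mirbase listing. A hedged sketch of a closer command, assuming NCBI BLAST+ (-reward/-penalty set the +5/-4 scores and -max_target_seqs the 100 max alignments; check ./blastn -help on your build, as BLAST+ only accepts certain reward/penalty pairs):
./blastn -task blastn -db /home/marianoavino/Downloads/mature -query /home/marianoavino/Downloads/testinputblast.fasta -evalue 10 -word_size 4 -reward 5 -penalty -4 -max_target_seqs 100 -out /home/marianoavino/Downloads/testBLast.out
Low-complexity filtering is another common reason for missing hits with short sequences, so disabling it with -dust no is also worth trying.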
In JMeter v2.13, is there a way to capture Throughput via non-GUI/command-line mode?
I have the jmeter.properties file configured to output via the Summariser and I'm also outputting another [more detailed] .csv results file.
call ..\..\binaries\apache-jmeter-2.13\bin\jmeter -n -t "API Performance.jmx" -l "performanceDetailedResults.csv"
The performanceDetailedResults.csv file provides:
timeStamp
elapsed time
responseCode
responseMessage
threadName
success
failureMessage
bytes sent
grpThreads
allThreads
Latency
However, no amount of tweaking the .properties file or the test itself seems to produce Throughput results like the ones I get via the GUI Summary Report's Save Table Data button.
All the articles, postings, and blogs I have found seem to indicate it isn't possible without manual manipulation in a spreadsheet. But I'm hoping someone out there has figured out a way to do this with no, or minimal, manual work, as the client doesn't want to calculate the Throughput value by hand each time.
Throughput is calculated by JMeter Listeners, so it isn't something you can enable via the properties file. The same applies to the other computed metrics, such as:
Average response time
50, 90, 95, and 99 percentiles
Standard Deviation
Basically, throughput is calculated simply by dividing the total number of requests by the elapsed time.
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time)
Hopefully it won't be too hard for you.
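A minimal Scala sketch of that formula, run against your performanceDetailedResults.csv. It assumes the first two columns are timeStamp (epoch milliseconds, sample start) and elapsed (milliseconds), with a header row; adjust the indices if your jmeter.properties writes a different column order:

import scala.io.Source

object Throughput {
  def main(args: Array[String]): Unit = {
    // Skip the header row; columns 0 and 1 assumed to be timeStamp and elapsed
    val samples = Source.fromFile("performanceDetailedResults.csv").getLines().drop(1)
      .map(_.split(","))
      .map(cols => (cols(0).toLong, cols(1).toLong))
      .toList
    val start = samples.map(_._1).min                   // start of the first sample
    val end = samples.map { case (t, e) => t + e }.max  // end of the last sample
    val seconds = (end - start) / 1000.0                // total time, including gaps
    println(f"Throughput: ${samples.size / seconds}%.2f requests/second")
  }
}

Note that JMeter can record either the start or the end of each sample in timeStamp (see the sampleresult.timestamp.start property); if yours records end times, the window is min(t - e) to max(t) instead.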
References:
Glossary #1
Glossary #2
Did you take a look at JMeter-Plugins?
This tool can generate an Aggregate Report, including throughput, through the command line.
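For example, something along these lines should turn your existing results file into a CSV with a throughput column (a sketch based on the plugins' CMDRunner; the paths and file names are illustrative):

java -jar CMDRunner.jar --tool Reporter --generate-csv aggregate.csv --input-jtl performanceDetailedResults.csv --plugin-type AggregateReport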
I am fairly new to both Kafka and Spark and am trying to write a job (either streaming or batch). I would like to read a predefined number of messages (say x) from Kafka, process the collection through workers, and only then start working on the next set of x messages. Basically, each message in Kafka is 10 KB, and I want to put 2 GB worth of messages in a single S3 file.
So is there any way of specifying the number of messages that the receiver fetches?
I have read that I can specify a 'from offset' while creating the DStream, but this use case is somewhat different: I need to be able to specify both a 'from offset' and a 'to offset'.
There's no way to set the ending offset as an initial parameter (as you can for the starting offset), but
you can use createDirectStream (the fourth overloaded version in the listing), which gives you access to the offsets of the current micro-batch via HasOffsetRanges (which hands you back OffsetRange instances).
That means you'll have to compare the values you get from OffsetRange with your ending offset in every micro-batch to see where you are and when to stop consuming from Kafka.
You also need to keep in mind that each partition has its own sequential offsets. It would probably be easiest to let yourself go slightly over 2 GB, i.e. as far as it takes to finish the current micro-batch (that could be a couple of kB, depending on the density of your messages), to avoid splitting the last batch into a consumed and an unconsumed part, which might require you to fiddle with the offsets Spark keeps to track what has been consumed.
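A rough Scala sketch of the approach, assuming Spark Streaming 1.3+ with the spark-streaming-kafka artifact; the broker, topic, target offset, and S3 write are placeholders. The spark.streaming.kafka.maxRatePerPartition setting caps how many messages each micro-batch pulls per partition, which gets you close to 'x messages at a time':

import java.util.concurrent.atomic.AtomicBoolean
import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.{HasOffsetRanges, KafkaUtils}

object BoundedKafkaRead {
  def main(args: Array[String]): Unit = {
    val targetUntil = 1000000L  // hypothetical ending offset per partition
    val conf = new SparkConf().setAppName("BoundedKafkaRead")
      .set("spark.streaming.kafka.maxRatePerPartition", "1000")  // cap: messages/partition/second
    val ssc = new StreamingContext(conf, Seconds(10))
    val kafkaParams = Map("metadata.broker.list" -> "broker1:9092")
    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, Set("mytopic"))

    val done = new AtomicBoolean(false)
    stream.foreachRDD { rdd =>
      // Each RDD of a direct stream carries the Kafka offset ranges it covers
      val ranges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
      // ... process rdd and append it to your S3 file here ...
      // Signal once every partition has passed the ending offset
      if (ranges.forall(_.untilOffset >= targetUntil)) done.set(true)
    }

    ssc.start()
    while (!done.get) Thread.sleep(1000)  // stop outside foreachRDD to avoid deadlocks
    ssc.stop(stopSparkContext = true, stopGracefully = true)
  }
}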
Hope this helps.
I have a graph that displays the count of files in a directory. I need the exact number of files in the graph but I cannot find that feature in the Zabbix configuration.
Any suggestions?
I've just found the answer.
You need to change ZBX_UNITS_ROUNDOFF_UPPER_LIMIT in /include/defines.inc.php. It controls the number of digits after the decimal point when the value is greater than the roundoff threshold; by default it is 2.
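For example (a sketch of the edit; the value 0 is my assumption for an integer file count, the stock value being 2):

define('ZBX_UNITS_ROUNDOFF_UPPER_LIMIT', 0);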
Is there a way to implement a server-side pagination mechanism for an HTML table (no JavaScript, AJAX, etc.)?
Yes, there is. Since you didn't mention a specific server-side language, here is the general logic.
Let's say there are 941 rows in the table and the page size is 10. The total number of pages would then be 95 (941 / 10 = 94.1, and the ceiling of 94.1 is 95).
For the default page (page number 1), serve the first 10 rows and give the first number in the layout the pressed-button look.
1 2 3 ... 95
When a user clicks a number, behind the scenes the server receives the corresponding page number, say i. Accordingly, a query is executed to fetch rows (i-1)*10+1 through i*10. Implement this manually or use the pagination facility of your server-side language, as in the sketch below.
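A minimal sketch of that logic in Scala with an embedded SQL query; the table name records and the id ordering column are hypothetical:

val pageSize = 10
val page = 3                                                     // page number received from the request
val totalRows = 941                                              // e.g. SELECT COUNT(*) FROM records
val totalPages = math.ceil(totalRows.toDouble / pageSize).toInt  // 95
val offset = (page - 1) * pageSize                               // rows to skip
val sql = s"SELECT * FROM records ORDER BY id LIMIT $pageSize OFFSET $offset"
// Render the returned rows into the <table>, plus links like ?page=1 ... ?page=95

LIMIT/OFFSET is the MySQL/PostgreSQL syntax; other databases spell it differently (e.g. OFFSET ... FETCH in SQL Server or Oracle).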