I have a problem with memcached.
I have the following code:
/**
 * Load the char object
 * @param  integer $char_id  the character id
 * @return object|null  the character object, or null
 */
function get_info( $char_id )
{
    $cache    = Cache::instance();
    $cachetag = Kohana::config( 'medeur.environment' ) . '-charinfo_' . $char_id . '_obj';
    Kohana::log( 'debug', "-> Getting $cachetag from CACHE..." );
    $char = $cache->get( $cachetag );
    if ( is_null( $char ) )
    {
        Kohana::log( 'debug', "-> Getting $cachetag from DB." );
        $char = ORM::factory( 'character', $char_id );
        if ( ! $char->loaded )
        {
            $char = null;
        }
        $cache->set( $cachetag, $char, 3600 );
    }
    return $char;
}
I see in the logfile that the object $char is taken from the cache:
2012-12-08 18:24:07 +01:00 --- debug: -> Getting test-global_adminmessage from CACHE...
2012-12-08 18:24:07 +01:00 --- debug: -> Getting test-charinfo_1_obj from CACHE...
However, I keep seeing in the profiler table that I am still hitting the database:
SELECT `characters`.* FROM (`characters`) WHERE `characters`.`id` = 1 ORDER BY `characters`.`id` ASC LIMIT 0, 1
Why? In this case, memcached would be useless...
Your "Getting nnnn from CACHE..." logging statement will always show up, regardless of whether or not you actually retrieve anything from the cache. Consider moving it into an else statement after the large if block.
if ( is_null( $char ) )
{
    ....
}
else
{
    Kohana::log( 'debug', "-> Got $cachetag from CACHE..." );
}
I checked with the guys at Kohana. The Kohana 2.x ORM class is not cacheable; it is cacheable as of framework version 3.x.
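If you are stuck on 2.x, one possible workaround (a minimal sketch, assuming the Kohana 2 ORM model exposes an as_array() method, and keeping the cache tag scheme from above) is to cache the model's plain field array instead of the ORM object itself:

function get_info( $char_id )
{
    $cache    = Cache::instance();
    $cachetag = Kohana::config( 'medeur.environment' ) . '-charinfo_' . $char_id . '_arr';

    $data = $cache->get( $cachetag );
    if ( is_null( $data ) )
    {
        $char = ORM::factory( 'character', $char_id );
        // Cache the plain field array, not the non-cacheable ORM object
        $data = $char->loaded ? $char->as_array() : null;
        $cache->set( $cachetag, $data, 3600 );
    }
    return $data;
}

Callers then receive a plain array rather than a live ORM object, so lazy-loaded relations are not available, but the cached copy survives serialization into memcached.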
I have a .csv file with more than 690 000 rows.
I found a solution to import the data that works very well, but it's a little slow... (around 100 records every 3 seconds = 63 hours!!).
How can I improve my code to make it faster?
I do the import via a console command.
Also, I would like to import only prescribers that aren't already in the database (to save time). To complicate things, no field is really unique (except for id).
Two prescribers can have the same lastname, or firstname, or live in the same city, or share an RPPS or professional code, but it's the combination of these 6 fields that makes them unique!
That's why I check every field before creating a new one.
<?php

namespace AppBundle\Command;

use Symfony\Bundle\FrameworkBundle\Command\ContainerAwareCommand;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;
use Symfony\Component\Console\Helper\ProgressBar;
use AppBundle\Entity\Prescriber;

class PrescribersImportCommand extends ContainerAwareCommand
{
    protected function configure()
    {
        $this
            // the name of the command (the part after "bin/console")
            ->setName('import:prescribers')
            ->setDescription('Import prescribers from .csv file')
        ;
    }

    protected function execute(InputInterface $input, OutputInterface $output)
    {
        // Show when the script is launched
        $now = new \DateTime();
        $output->writeln('<comment>Start : ' . $now->format('d-m-Y G:i:s') . ' ---</comment>');

        // Import CSV into DB via Doctrine ORM
        $this->import($input, $output);

        // Show when the script is over
        $now = new \DateTime();
        $output->writeln('<comment>End : ' . $now->format('d-m-Y G:i:s') . ' ---</comment>');
    }

    protected function import(InputInterface $input, OutputInterface $output)
    {
        $em = $this->getContainer()->get('doctrine')->getManager();

        // Turn off Doctrine's default query logging to save memory
        $em->getConnection()->getConfiguration()->setSQLLogger(null);

        // Get a PHP array of data from the CSV
        $data = $this->getData();

        // Start progress
        $size = count($data);
        $progress = new ProgressBar($output, $size);
        $progress->start();

        // Process each row of data
        $batchSize = 100; // frequency for persisting the data
        $i = 1;           // current index of records

        foreach ($data as $row) {
            $p = $em->getRepository('AppBundle:Prescriber')->findOneBy(array(
                'rpps'       => $row['rpps'],
                'lastname'   => $row['nom'],
                'firstname'  => $row['prenom'],
                'profCode'   => $row['code_prof'],
                'postalCode' => $row['code_postal'],
                'city'       => $row['ville'],
            ));

            // If the prescriber does not exist, create one
            if (!is_object($p)) {
                $p = new Prescriber();
                $p->setRpps($row['rpps']);
                $p->setLastname($row['nom']);
                $p->setFirstname($row['prenom']);
                $p->setProfCode($row['code_prof']);
                $p->setPostalCode($row['code_postal']);
                $p->setCity($row['ville']);
                $em->persist($p);
            }

            // Flush every 100 persisted prescribers
            if (($i % $batchSize) === 0) {
                $em->flush();
                $em->clear(); // detaches all objects from Doctrine!

                // Advance the progress display on the console
                $progress->advance($batchSize);
                $progress->display();
            }
            $i++;
        }

        // Flush and clear the remaining queued data
        $em->flush();
        $em->clear();

        // End the progress bar process
        $progress->finish();
    }

    protected function getData()
    {
        // Get the CSV from the filesystem
        $fileName = 'web/docs/prescripteurs.csv';

        // Use a service to convert the CSV into a PHP array
        $converter = $this->getContainer()->get('app.csvtoarray_converter');
        $data = $converter->convert($fileName);

        return $data;
    }
}
EDIT
According to @Jake N's answer, here is the final code.
It's very much faster: 10 minutes to import 653,727 of the 693,230 rows (39,503 duplicate items!).
1) Added two columns to my table: created_at and updated_at.
2) Added a single composite UNIQUE index covering every column of my table (except id and the dates) to prevent duplicate items, via phpMyAdmin (see the DDL sketch below).
3) Added ON DUPLICATE KEY UPDATE to my query, so duplicates just update the updated_at column.
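For reference, the equivalent DDL might look like the following sketch (the index name is hypothetical; the columns match the INSERT below; run it once, e.g. from phpMyAdmin, not on every import):

// Hypothetical index name; a single composite UNIQUE index over the 6 fields
$em->getConnection()->exec(
    'ALTER TABLE prescripteurs
     ADD UNIQUE INDEX uniq_prescriber (rpps, nom, prenom, code_prof, code_postal, ville)'
);

Note that MySQL caps the total key length of an index, so if several of these columns are long VARCHARs you may need prefix lengths (e.g. nom(50)).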
// Prepare once outside the loop; only the bound values change per row
$sql = "INSERT INTO prescripteurs (rpps, nom, prenom, code_prof, code_postal, ville)
        VALUES (:rpps, :nom, :prenom, :codeprof, :cp, :ville)
        ON DUPLICATE KEY UPDATE updated_at = NOW()";
$stmt = $em->getConnection()->prepare($sql);

foreach ($data as $row) {
    $r = $stmt->execute(array(
        'rpps'     => $row['rpps'],
        'nom'      => $row['nom'],
        'prenom'   => $row['prenom'],
        'codeprof' => $row['code_prof'],
        'cp'       => $row['code_postal'],
        'ville'    => $row['ville'],
    ));

    if (!$r) {
        $progress->clear();
        $output->writeln('<comment>An error occurred.</comment>');
        $progress->display();
    } elseif (($i % $batchSize) === 0) {
        $progress->advance($batchSize);
        $progress->display();
    }
    $i++;
}

// End the progress bar process
$progress->finish();
1. Don't use Doctrine
Try not to use Doctrine if you can: it eats memory and, as you have found, it is slow. Use raw SQL for the import, with simple INSERT statements:
$sql = <<<SQL
INSERT INTO `category` (`label`, `code`, `is_hidden`) VALUES ('Hello', 'World', '1');
SQL;
$stmt = $this->getContainer()->get('doctrine')->getManager()->getConnection()->prepare($sql);
$stmt->execute();
Or you can prepare the statement with bound values:
$sql = <<<SQL
INSERT INTO `category` (`label`, `code`, `is_hidden`) VALUES (:label, :code, :hidden);
SQL;
$stmt = $this->getContainer()->get('doctrine')->getManager()->getConnection()->prepare($sql);
$stmt->execute(['label' => 'Hello', 'code' => 'World', 'hidden' => 1]);
Untested code, but it should get you started as this is how I have done it before.
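You can push this further by batching many rows into one INSERT statement. A minimal sketch, assuming the prescripteurs table and the $em / $data variables from the question, plus a hypothetical $offset marking the start of the current batch:

// Build one multi-row INSERT per batch of 100 rows
$rows = array_slice($data, $offset, 100);
$placeholders = implode(', ', array_fill(0, count($rows), '(?, ?, ?, ?, ?, ?)'));

$sql = "INSERT INTO prescripteurs (rpps, nom, prenom, code_prof, code_postal, ville)
        VALUES $placeholders";

// Flatten the row values in column order to match the placeholders
$params = array();
foreach ($rows as $row) {
    array_push($params, $row['rpps'], $row['nom'], $row['prenom'],
        $row['code_prof'], $row['code_postal'], $row['ville']);
}

$stmt = $em->getConnection()->prepare($sql);
$stmt->execute($params);

That is one network round-trip per 100 rows instead of 100 round-trips, which matters most when the database is remote.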
2. Index
Also, for your existing checks, do you have an index on all those fields, so that the lookup is as quick as possible? See the sketch below.
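A composite index covering the six fields used in the findOneBy() call lets MySQL resolve each existence check without a full table scan (a sketch; the index name is hypothetical and the column names are taken from the question's table):

// Hypothetical index name; one composite index serves the 6-field lookup
$em->getConnection()->exec(
    'CREATE INDEX idx_prescriber_lookup
     ON prescripteurs (rpps, nom, prenom, code_prof, code_postal, ville)'
);

The UNIQUE index added in the EDIT above covers the same columns, so it serves this purpose as well.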
I've installed MediaWiki 1.27.1 and managed to get a basic single-wiki setup working. Now I'm trying to change it into a wiki family according to the instructions at Manual:Wiki_family. Here's what I'm aiming for:
wiki.mysite.com/ #public wiki
wiki.mysite.com/priv1 #private wiki 1
wiki.mysite.com/priv2 #private wiki 2
The result I got:
wiki.mysite.com shows the public wiki, as expected.
wiki.mysite.com/priv1 returns 404
wiki.mysite.com/priv2 returns 404
[edit: added message text] The text of the 404 message:
Not Found
The requested document was not found on this server.
Web Server at mysite.com
What I did:
I followed the steps outlined in the manual, generated the 3 copies of LocalSettings.php, and renamed them. I then modified the main LocalSettings.php to the following:
<?php
if ( !defined( 'MEDIAWIKI' ) ) {
    exit;
}

## Database settings - cut-and-pasted out from the 3 sub-wikis' LocalSettings.php
$wgDBtype     = "mysql";
$wgDBserver   = "localhost";
$wgDBname     = "db_wiki";
$wgDBuser     = "db_wiki_user";
$wgDBpassword = "password";

$callingurl = strtolower( $_SERVER['REQUEST_URI'] ); // get the calling url
if ( strpos( $callingurl, '/priv1' ) === 0 ) {
    require_once 'LocalSettings_priv1.php';
} elseif ( strpos( $callingurl, '/priv2' ) === 0 ) {
    require_once 'LocalSettings_priv2.php';
} elseif ( strpos( $callingurl, '/' ) === 0 ) {
    require_once 'LocalSettings_public.php';
} else {
    header( 'HTTP/1.1 404 Not Found' );
    echo "This wiki (\"" . htmlspecialchars( $callingurl ) . "\") is not available. Check configuration.";
    exit( 0 );
}
As a test, I've tried changing the file name in the require_once 'LocalSettings_public.php' line to each sub-wiki's settings file; when I open wiki.mysite.com, the related sub-wiki does get shown correctly. The URLs with the subdirectory path continue to return 404, however.
Any idea what's wrong with my setup?
This is a reported bug in Perl 6: X::AdHoc instead of X::TypeCheck::Binding with subset parameter, first reported in November 2015.
While playing with my Perl 6 module Chemistry::Elements, I've run into an Exception issue I didn't expect.
I define a type, ZInt, which limits numbers to the ordinal numbers found on the periodic chart (which I've faked a bit here). I then use that type to constrain a parameter to a subroutine. I expected to get some sort of X::TypeCheck, but I get X::AdHoc instead:
use v6;

subset ZInt of Cool is export where {
    state ( $min, $max ) = <1 120>;
    ( $_.truncate == $_ and $min <= $_ <= $max )
        or warn "Z must be between a positive whole number from $min to $max. Got <$_>."
};

sub foo ( ZInt $Z ) { say $Z }

try {
    CATCH {
        default { .^name.say }
    }
    foo( 156 );
}
First, I get the warning twice, which is weird:
Z must be between a positive whole number from 1 to 120. Got <156>. in block at zint.p6 line 5
Z must be between a positive whole number from 1 to 120. Got <156>. in block at zint.p6 line 5
X::AdHoc
But, I get the X::AdHoc type when I'd rather people knew it was a type error.
I checked what would happen without the warn and got X::AdHoc again:
subset ZInt of Cool is export where {
    state ( $min, $max ) = <1 120>;
    ( $_.truncate == $_ and $min <= $_ <= $max )
};
So, I figured I could throw my own exception:
subset ZInt of Cool is export where {
    state ( $min, $max ) = <1 120>;
    ( $_.truncate == $_ and $min <= $_ <= $max )
        or X::TypeCheck.new.throw;
};
But, I get a warning:
Use of uninitialized value of type Any in string context
Any of .^name, .perl, .gist, or .say can stringify undefined things, if needed.
At this point I don't know what's complaining. I figure one of those methods expects something I'm not supplying but I don't see anything about parameters for new or throw in the docs.
How do I get the type I want without the warning, along with my custom text?
Don't throw the exception or warn with one. Instead, you want to fail:
subset ZInt of Cool is export where {
    state ( $min, $max ) = <1 120>;
    ( $_.truncate == $_ and $min <= $_ <= $max )
        or fail "Z must be between a positive whole number from $min to $max. Got <$_>."
};
I believe that's your intent. Failing with your own exception is fine too, but X::TypeCheck has a bug in it. It should either require "operation" or provide a reasonable default as it does for "got" and "expected".
subset ZInt of Cool is export where {
    state ( $min, $max ) = <1 120>;
    ( $_.truncate == $_ and $min <= $_ <= $max )
        or fail X::TypeCheck.new(
            operation => "type check",
            expected  => ::('ZInt'),
            got       => $_,
        );
};
You could pass --ll-exception and try to figure out how exactly you end up with the errors and messages you got, but I'm not sure how helpful that will be.
As to the warning about use of an uninitialized value: you need to provide a named operation argument to X::TypeCheck.new; other arguments you may provide are got and expected, cf. core/Exception.pm.
It is, however, a Bad Idea to throw from a subset declaration, as any smartmatch against that particular type will now explode. A slightly better idea would be to .fail the exception, but that still doesn't feel right to me: not being a member of a subset type is not an exceptional condition.
Alternatively, you could provide a multi candidate that does the dying:
subset ZInt of Cool where $_ %% 1 && $_ ~~ 1..120;

proto foo($) {*}
multi foo(ZInt $Z) { say $Z }
multi foo($Z) {
    die X::TypeCheck.new(
        operation => 'foo',
        got       => $Z,
        expected  => ZInt
    );
}
That still has issues if you provide an argument like "hello" that fails on numeric conversion, as %% will throw instead of propagating the failure, which could be considered a defect in the Rakudo core setting.
You can work around that one via things like
subset ZInt of Cool where { try $_ %% 1 && $_ ~~ 1..120 }
or
subset ZInt of Cool where { .Numeric andthen $_ %% 1 && $_ ~~ 1..120 }
The whole interaction of argument type checking, subsets or where-clauses, failures and exceptions can be somewhat brittle, so you may want to experiment a bit until you arrive at semantics and behaviour you like.
Another approach would be doing a coercion from Cool to Int with a separate range check:
subset ZInt of Int where 1..120;

sub foo(Int(Cool) $Z where ZInt) {
    say $Z.perl;
}
In an ideal world, there should be some way to express this with a coercing type constraint like ZInt(Cool).
I am trying to insert into a table in batches of 100 (I heard that's the best batch size to use with MySQL). I use Scala 2.10.4 with sbt 0.13.6, and the JDBC framework I am using is ScalikeJDBC with HikariCP. My connection settings look like this:
val dataSource: DataSource = {
  val ds = new HikariDataSource()
  ds.setDataSourceClassName("com.mysql.jdbc.jdbc2.optional.MysqlDataSource")
  ds.addDataSourceProperty("url", "jdbc:mysql://" + org.Server.GlobalSettings.DB.mySQLIP + ":3306?rewriteBatchedStatements=true")
  ds.addDataSourceProperty("autoCommit", "false")
  ds.addDataSourceProperty("user", "someUser")
  ds.addDataSourceProperty("password", "not my password")
  ds
}

ConnectionPool.add('review, new DataSourceConnectionPool(dataSource))
The insert code:
try {
  implicit val session = AutoSession
  val paramList: scala.collection.mutable.ListBuffer[Seq[(Symbol, Any)]] =
    scala.collection.mutable.ListBuffer[Seq[(Symbol, Any)]]()
  ...
  for (rev <- reviews) {
    paramList += Seq[(Symbol, Any)](
      'review_id -> rev.review_idx,
      'text -> rev.text,
      'category_id -> rev.category_id,
      'aspect_id -> aspectId,
      'not_aspect -> noAspect /* 0 */,
      'certainty_aspect -> rev.certainty_aspect,
      'sentiment -> rev.sentiment,
      'sentiment_grade -> rev.certainty_sentiment,
      'stars -> rev.stars
    )
  }
  ...
  try {
    if (paramList != null && paramList.length > 0) {
      val result = NamedDB('review) localTx { implicit session =>
        sql"""INSERT INTO `MasterFlow`.`classifier_results`
                (`review_id`, `text`, `category_id`, `aspect_id`, `not_aspect`,
                 `certainty_aspect`, `sentiment`, `sentiment_grade`, `stars`)
              VALUES
                ({review_id}, {text}, {category_id}, {aspect_id},
                 {not_aspect}, {certainty_aspect}, {sentiment}, {sentiment_grade}, {stars})
           """
          .batchByName(paramList.toIndexedSeq: _*)
          .apply()
      }
Each time I insert a batch it takes 15 seconds. My logs:
29/10/2014 14:03:36 - DEBUG[Hikari Housekeeping Timer (pool HikariPool-0)] HikariPool - Before cleanup pool stats HikariPool-0 (total=10, inUse=1, avail=9, waiting=0)
29/10/2014 14:03:36 - DEBUG[Hikari Housekeeping Timer (pool HikariPool-0)] HikariPool - After cleanup pool stats HikariPool-0 (total=10, inUse=1, avail=9, waiting=0)
29/10/2014 14:03:46 - DEBUG[default-akka.actor.default-dispatcher-3] StatementExecutor$$anon$1 - SQL execution completed
[SQL Execution]
INSERT INTO `MasterFlow`.`classifier_results` ( `review_id`, `text`, `category_id`, `aspect_id`, `not_aspect`, `certainty_aspect`, `sentiment`, `sentiment_grade`, `stars`) VALUES ( ...can't show this....);
INSERT INTO `MasterFlow`.`classifier_results` ( `review_id`, `text`, `category_id`, `aspect_id`, `not_aspect`, `certainty_aspect`, `sentiment`, `sentiment_grade`, `stars`) VALUES ( ...can't show this....);
.
.
.
INSERT INTO `MasterFlow`.`classifier_results` ( `review_id`, `text`, `category_id`, `aspect_id`, `not_aspect`, `certainty_aspect`, `sentiment`, `sentiment_grade`, `stars`) VALUES ( ...can't show this....);
... (total: 100 times); (15466 ms)
[Stack Trace]
...
logic.DB.ClassifierJsonToDB$$anonfun$1.apply(ClassifierJsonToDB.scala:119)
logic.DB.ClassifierJsonToDB$$anonfun$1.apply(ClassifierJsonToDB.scala:96)
scalikejdbc.DBConnection$$anonfun$_localTx$1$1.apply(DBConnection.scala:252)
scala.util.control.Exception$Catch.apply(Exception.scala:102)
scalikejdbc.DBConnection$class._localTx$1(DBConnection.scala:250)
scalikejdbc.DBConnection$$anonfun$localTx$1.apply(DBConnection.scala:257)
scalikejdbc.DBConnection$$anonfun$localTx$1.apply(DBConnection.scala:257)
scalikejdbc.LoanPattern$class.using(LoanPattern.scala:33)
scalikejdbc.NamedDB.using(NamedDB.scala:32)
scalikejdbc.DBConnection$class.localTx(DBConnection.scala:257)
scalikejdbc.NamedDB.localTx(NamedDB.scala:32)
logic.DB.ClassifierJsonToDB$.insertBulk(ClassifierJsonToDB.scala:96)
logic.DB.ClassifierJsonToDB$$anonfun$bulkInsert$1.apply(ClassifierJsonToDB.scala:176)
logic.DB.ClassifierJsonToDB$$anonfun$bulkInsert$1.apply(ClassifierJsonToDB.scala:167)
scala.collection.Iterator$class.foreach(Iterator.scala:727)
...
When I run it on the server that hosts the MySQL database it runs fast. What can I do to make it run faster from a remote computer?
In case anyone needs it: I had a similar problem batch-inserting 10,000 records into MySQL with ScalikeJDBC, and it was solved by setting rewriteBatchedStatements to true in the JDBC URL ("jdbc:mysql://host:3306/db?rewriteBatchedStatements=true"). It reduced the batch insert time from 40 seconds to 1 second!
I guess this is not an issue of ScalikeJDBC or HikariCP. You should investigate the network environment between your machine and the MySQL server.
After updating my WordPress install to 3.9, I keep getting these errors:
Warning: mysql_query(): Access denied for user 'www-data'@'localhost' (using password: NO) in /home/sites/wordpress/site/wp-content/plugins/crm/main.php on line 20
Warning: mysql_query(): A link to the server could not be established in /home/sites/wordpress/site/wp-content/plugins/crm/main.php on line 20
Warning: mysql_fetch_row() expects parameter 1 to be resource, boolean given in /home/sites/wordpress/site/wp-content/plugins/crm/main.php on line 21
I can't quite figure out what's wrong. Here's the code that worked pre-3.9:
<?php
session_start();
/**
 * Plugin Name: CRM
 * Description:
 * Version:
 * Author:
 *
 */
add_action( 'admin_menu', 'menu' );

function menu() {
    add_menu_page( 'CRM', 'CRM', 3, 'form', 'form' );
}

function form() {
    global $wpdb, $current_user, $user_ID;
    echo "<h3>CRM</h3>";
    $count = mysql_query("SELECT COUNT(id) FROM user_form_data");
    $nume2 = mysql_fetch_row($count);
    $nume  = $nume2[0];
I've snipped the rest, as it does not seem relevant for the error :)
SOLUTION:
Found it.
The error was in the 3.9 upgrade.
http://make.wordpress.org/core/2014/04/07/mysql-in-wordpress-3-9/
"In WordPress 3.9, we added an extra layer to WPDB, causing it to switch to using the mysqli PHP library, when using PHP 5.5 or higher.
For plugin developers, this means that you absolutely shouldn’t be using PHP’s mysql_*() functions any more – you can use the equivalent WPDB functions instead."
You should read this post: http://make.wordpress.org/core/2014/04/07/mysql-in-wordpress-3-9/
In WordPress 3.9, we added an extra layer to WPDB, causing it to switch to using the mysqli PHP library, when using PHP 5.5 or higher.
For plugin developers, this means that you absolutely shouldn’t be using PHP’s mysql_*() functions any more – you can use the equivalent WPDB functions instead.
Change this, which still uses the mysql_*() functions:
$count = mysql_query("SELECT COUNT(id) FROM user_form_data");
$nume2 = mysql_fetch_row($count);
to the WPDB equivalent, which returns the COUNT value directly:
$nume = $wpdb->get_var("SELECT COUNT(id) FROM user_form_data"); // same value mysql_fetch_row() gave you in $nume2[0]
Try this, hope it helps:
<?php
/**
 * Plugin Name: CRM
 * Description: any desc
 * Author: ABS
 *
 */
add_action( 'admin_menu', 'user_data_menu' );

function user_data_menu() {
    add_menu_page( 'CRM', 'CRM', 3, 'user_data_form', 'user_data_form' );
}

function user_data_form() {
    #session_start();
    global $wpdb, $current_user, $user_ID;
    echo "<h3>CRM</h3>";

    // WPDB replaces the old mysql_query()/mysql_fetch_row() calls
    $nume  = $wpdb->get_var("SELECT COUNT(id) FROM user_form_data");
    $limit = 20; // define before it is used in the condition below

    if ( $limit < $nume && empty($_POST['searching']) && empty($_POST['filter_flag']) && empty($_POST['rowsselect']) ) {
        $start = isset($_GET['start']) ? (int) $_GET['start'] : 0;
        $eu    = ($start - 0);
        $this4 = $eu + $limit;
        $back  = $eu - $limit;
        $next  = $eu + $limit;
    }
} ?>
It looks like the update changed the MySQL username and password, so the problem isn't the code.
Check the wp-config.php file to see whether these settings have changed and are now incorrect.
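For reference, these are the standard database settings in wp-config.php (the values shown here are placeholders):

// Standard WordPress database settings in wp-config.php (placeholder values)
define( 'DB_NAME',     'database_name_here' );
define( 'DB_USER',     'username_here' );    // 'www-data' in the warning suggests no credentials were supplied
define( 'DB_PASSWORD', 'password_here' );
define( 'DB_HOST',     'localhost' );

If the user or password here no longer matches what MySQL expects, every database call from the site will fail with Access denied.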