Does get_post_meta() make a separate query to the database? - mysql

A question I've wondered about for a long time: do WordPress functions like get_post_meta() make a SQL query to the database, or is the data loaded into the global WP_Query when the page loads? Thank you.

get_post_meta() is a wrapper for get_metadata() and get_metadata() uses the global WP_Object_Cache object.
The relevant code is:
function get_metadata( $meta_type, $object_id, $meta_key = '', $single = false ) {
    ...
    $meta_cache = wp_cache_get( $object_id, $meta_type . '_meta' );

    if ( ! $meta_cache ) {
        $meta_cache = update_meta_cache( $meta_type, array( $object_id ) );
        if ( isset( $meta_cache[ $object_id ] ) ) {
            $meta_cache = $meta_cache[ $object_id ];
        } else {
            $meta_cache = null;
        }
    }
    ...
}
where wp_cache_get() checks the global WP_Object_Cache object $wp_object_cache, and update_meta_cache() updates $wp_object_cache when the data is not yet cached. Of course, that update requires a SQL query.
Incidentally, the global WP_Object_Cache object $wp_object_cache is used for much more than post meta data: it is a generic cache, and WordPress core and plugins use it for caching values that are expensive to recompute.
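To make the flow concrete, here is a minimal plain-PHP sketch of the cache-or-query pattern that get_metadata() follows. The names and sample data are hypothetical: $cache stands in for $wp_object_cache, and loadFromDb() stands in for the SQL query that update_meta_cache() issues.

```php
<?php
// Sketch of the cache-or-query pattern (not WordPress's actual code).
$cache = [];

function loadFromDb( int $objectId ): array {
    // Pretend this runs "SELECT meta_key, meta_value FROM wp_postmeta WHERE post_id = ..."
    return [ 'price' => '9.99', 'color' => 'red' ];
}

function getMeta( int $objectId, string $key ) {
    global $cache;
    if ( ! isset( $cache[ $objectId ] ) ) {
        // Cache miss: one SQL query loads ALL meta for this object.
        $cache[ $objectId ] = loadFromDb( $objectId );
    }
    // Every later call for this object is served from the cache.
    return $cache[ $objectId ][ $key ] ?? null;
}

echo getMeta( 42, 'price' ); // 9.99 (triggers the one "query")
echo getMeta( 42, 'color' ); // red (served from cache, no query)
```

So within one request, the first get_post_meta() call for a post may cost a query, and subsequent calls for the same post are free.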

Related

Store a complex hash in Apache::Sessions:MySQL session

So, I'm trying to store a decoded JSON object in a tied Apache session. This is my code:
$url = "https://apilink";
$content = get($url);
die "Can't Get $url" if ( !defined $content );
$jsonOb = decode_json($content);
%aprecords = %$jsonOb;
# Push the jsonOb into the session
$session{apirecords} = \%aprecords;
$session{apirecords} does not store the %aprecords reference. However, when I change the statement to $session{apirecords} = \%jsonOb ;, it stores apirecords in the sessions table, but the reference to %jsonOb has no values in it.
PS:
I have tried the following, and neither seems to work:
1) $session{apirecords} = \%$jsonOb ;
2) $session{apirecords} = { %aprecords } ;
The JSON object is perfectly well structured.
Code for tying a session:
tie %session, "Apache::Session::MySQL", $sessionID, {
    Handle     => $dbObject,
    LockHandle => $dbObject,
    TableName  => 'sessions',
};
# If a session ID doesn't exist, create a new session and get the new session ID
if ( !defined $sessionID ) {
    $sessionID = $session{_session_id};
    $session{count} = 0;
}
A helping hand would be much appreciated!
JSON Sample: https://jsonblob.com/feed3bba-f1cd-11e8-9450-2904e8ecf943
As pointed out by GMB, the blob size (64 KB) wasn't big enough for the JSON object.
The solution is to change the blob datatype to MEDIUMBLOB.
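For reference, assuming the default Apache::Session::MySQL schema, where the serialized session lives in a column named a_session (check your own table's column name), the change would look something like:

```sql
-- Widen the session column from BLOB (64 KB max) to MEDIUMBLOB (16 MB max)
ALTER TABLE sessions MODIFY a_session MEDIUMBLOB;
```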

Convert datetimes from MySQL table to ISO8601 to create JSON feed in WordPress for use with FullCalendar

I've written an action in WordPress that grabs the rows from a table and encodes them in JSON format, so I can use them with the FullCalendar javascript event calendar.
The date fields from the table need to be formatted ISO8601.
In other words, when the DB renders the date/time: 2017-08-06 10:22:20, I need it converted after the query to: 2017-08-06T10:22:20 for the date fields in the query.
I'm not concerned about timezone offsets.
My function:
add_action( 'getmyevents', 'get_my_events' );

function get_my_events( $atts = [], $content = null ) {
    // Use WordPress database functions
    global $wpdb;
    // List of events will be stored in JSON format
    $json = array();
    // Query retrieves list of events
    $mytable = $wpdb->prefix . "my_events";
    $myids   = $wpdb->get_results( "SELECT * FROM " . $mytable );
    // Send the encoded result to the success page
    echo json_encode( $myids, JSON_UNESCAPED_SLASHES );
    // return JSON
    return $json;
}
Can someone give me a quick, direct way to convert the datetime strings in the query to ISO8601?
Maybe you can try something like this. I don't know the name of your date column, so uncomment the print_r() call to find it.
foreach ( $myids as $key => $row ) {
    // print_r( $row );
    $date_reformatted = strtotime( $row->date_col );
    $myids[ $key ]->date_col = date( 'c', $date_reformatted );
}
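One caveat worth noting: date('c') appends a timezone offset (for example +00:00), while the question asked for the bare 2017-08-06T10:22:20 form. An explicit format string produces the bare form. A standalone sketch, using the sample value from the question:

```php
<?php
// Convert a MySQL DATETIME string to ISO 8601 without a timezone offset.
date_default_timezone_set( 'UTC' ); // any fixed zone works; parse and format agree

$mysqlDatetime = '2017-08-06 10:22:20';
$ts = strtotime( $mysqlDatetime );

// '\T' escapes the T so date() emits it literally.
echo date( 'Y-m-d\TH:i:s', $ts ); // 2017-08-06T10:22:20
```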
It isn't the ideal answer I was looking for, but I did come up with a working solution. Mark's suggestion about filtering during the query gave me the clue I needed.
add_action( 'getmyevents', 'get_my_events' );

function get_my_events( $atts = [], $content = null ) {
    global $wpdb;
    // Values sent via ajax to calendar from my_events table
    // List of events
    $json = array();
    // Query that retrieves events
    $mytable = $wpdb->prefix . "my_events";
    $myids   = $wpdb->get_results( 'SELECT id, title, url, DATE_FORMAT( start, "%Y-%m-%d\T%H:%i:%s" ) as start, DATE_FORMAT( end, "%Y-%m-%d\T%H:%i:%s" ) as end, allDay FROM ' . $mytable );
    // Send the encoded result to the success page
    echo json_encode( $myids, JSON_UNESCAPED_SLASHES );
    // return JSON
    return $json;
}
However, if someone can come up with an answer that doesn't require specifying the columns by name, that would be great. Even better would be not formatting within the query at all, but rather formatting afterward; I always like to minimize the processing MySQL has to do.
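One hedged way to avoid naming columns is to post-process every value that looks like a DATETIME after the query. This is only a sketch: the regex assumes MySQL's standard YYYY-MM-DD HH:MM:SS output, and $rows stands in for the stdClass rows that $wpdb->get_results() returns.

```php
<?php
// Sample stand-in for the result of $wpdb->get_results().
$rows = [
    (object) [ 'id' => 1, 'title' => 'Picnic', 'start' => '2017-08-06 10:22:20' ],
];

foreach ( $rows as $row ) {
    foreach ( $row as $col => $val ) {
        // A MySQL DATETIME only needs its space swapped for a 'T'.
        if ( is_string( $val ) && preg_match( '/^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}$/', $val ) ) {
            $row->$col = str_replace( ' ', 'T', $val );
        }
    }
}

echo $rows[0]->start; // 2017-08-06T10:22:20
```

Because PHP objects are handles, mutating $row inside the loop updates the array elements in place, so no column names need to be hard-coded.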

Laravel 5: How to dump SQL query?

Laravel 5's built-in solution
In Laravel 5+, we can use \DB::getQueryLog() to retrieve all executed queries. Since query logging is an expensive operation that can cause performance issues, it is disabled by default in L5 and recommended only for development environments. We can enable query logging with the method \DB::enableQueryLog(), as mentioned in Laravel's documentation.
Problem in built-in solution
DB::getQueryLog() is great, but sometimes we want the dump in flat SQL format so we can copy/paste it into a favorite MySQL tool such as phpMyAdmin or SQLyog to execute, debug, or optimize it.
So, I needed a helper function that produces a dump with the following additions:
The file and line number from which the dump was called.
Backticks removed from the query.
A flat query with bindings substituted, so I don't need to fill in the binding parameters manually and can copy/paste the SQL into phpMyAdmin etc. to debug/optimize the query.
Custom Solution
Step 1: Enable Query Logging
Copy/paste the following block of code at the top of your routes file:
# File: app/Http/routes.php
if ( \App::environment( 'local' ) ) {
    \DB::enableQueryLog();
}
Step 2: Add helper function
if ( ! function_exists( 'dump_query' ) ) {
    function dump_query( $last_query_only = true, $remove_back_ticks = true ) {
        // location and line
        $caller = debug_backtrace( DEBUG_BACKTRACE_IGNORE_ARGS, 1 );
        $info   = count( $caller )
            ? sprintf( "%s (%d)", $caller[0]['file'], $caller[0]['line'] )
            : "*** Unable to parse location info. ***";

        // log of executed queries
        $logs = DB::getQueryLog();
        if ( empty( $logs ) || ! is_array( $logs ) ) {
            $logs = "No SQL query found. *** Make sure you have enabled DB::enableQueryLog() ***";
        } else {
            $logs = $last_query_only ? array_pop( $logs ) : $logs;
        }

        // flatten bindings
        if ( isset( $logs['query'] ) ) {
            $logs['query'] = $remove_back_ticks ? preg_replace( "/`/", "", $logs['query'] ) : $logs['query'];

            // updating bindings
            $bindings = $logs['bindings'];
            if ( ! empty( $bindings ) ) {
                $logs['query'] = preg_replace_callback( '/\?/', function ( $match ) use ( &$bindings ) {
                    return "'" . array_shift( $bindings ) . "'";
                }, $logs['query'] );
            }
        } else {
            foreach ( $logs as &$log ) {
                $log['query'] = $remove_back_ticks ? preg_replace( "/`/", "", $log['query'] ) : $log['query'];

                // updating bindings
                $bindings = $log['bindings'];
                if ( ! empty( $bindings ) ) {
                    $log['query'] = preg_replace_callback( '/\?/', function ( $match ) use ( &$bindings ) {
                        return "'" . array_shift( $bindings ) . "'";
                    }, $log['query'] );
                }
            }
        }

        // output
        $output = [
            '*FILE*' => $info,
            '*SQL*'  => $logs,
        ];
        dump( $output );
    }
}
How to use?
To dump the last executed query, call the helper just after the query runs:
dump_query();
To dump all executed queries, use:
dump_query( false );
The file and line number from which the dump was called.
I don't understand why you need this, since you always know where you called the dump function, but never mind; you have your solution for that.
Remove back-ticks from the query.
You don't need to remove backticks; the query will work in MySQL with them as well.
A flat query with bindings substituted, so the SQL can be copied into phpMyAdmin to debug/optimize.
You can use vsprintf to substitute the binding parameters:
$queries = DB::getQueryLog();
foreach ( $queries as $key => $query ) {
    $queries[ $key ]['query'] = vsprintf( str_replace( '?', '\'%s\'', $query['query'] ), $query['bindings'] );
}
return $queries;
And I would suggest you check out this GitHub repo: squareboat/sql-doctor
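The vsprintf substitution used in the loop above can be seen in isolation with a plain string and bindings (the sample query and values here are made up):

```php
<?php
// Inline the bindings into a logged query string, as the loop above does.
$query    = 'select * from users where id = ? and status = ?';
$bindings = [ 42, 'active' ];

// Each ? becomes a quoted %s placeholder, then vsprintf fills them in order.
$flat = vsprintf( str_replace( '?', "'%s'", $query ), $bindings );

echo $flat; // select * from users where id = '42' and status = 'active'
```

Note this simple form assumes the query itself contains no % characters or literal question marks, which holds for typical logged Eloquent queries.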
I was looking for a simple solution, and the one below worked for me.
DB::enableQueryLog();
User::find(1); //Any Eloquent query
// and then you can get query log
dd(DB::getQueryLog());
Reference Links:
How to Get the Query Executed in Laravel 5? DB::getQueryLog() Returning Empty Array
https://www.codegrepper.com/code-examples/php/dump+sql+query+laravel
Add this code at the top of your routes file.
Laravel 5.2: routes.php
Laravel 5.3+: web.php
<?php
// Display all SQL executed in Eloquent
Event::listen( 'Illuminate\Database\Events\QueryExecuted', function ( $query ) {
    var_dump( $query->sql );
    var_dump( $query->bindings );
    var_dump( $query->time );
    echo "<br><br><br>";
} );
For a Laravel 8 application, it can be useful to put the following in the AppServiceProvider.php file:
/**
 * Bootstrap any application services.
 *
 * @return void
 */
public function boot()
{
    // [...]
    // Dump SQL queries on demand **ONLY IN DEV**
    if ( env( 'APP_ENV' ) === 'local' ) {
        DB::enableQueryLog();
        Event::listen( RequestHandled::class, function ( $event ) {
            if ( $event->request->has( 'sql-debug' ) ) {
                $queries = DB::getQueryLog();
                Log::debug( $queries );
                dump( $queries );
            }
        } );
    }
    // [...]
}
Then appending &sql-debug=1 to the URL will dump the queries.

MediaWiki DB connection error while attempting to upgrade to 1.22

I have a MediaWiki installation on a shared host server. It's at version 1.19.1 and I'm trying to update to 1.22.2. The documentation indicates that a one-step update should be OK for this.
I've done this several times for past updates successfully, and am following previous notes. I set up a new directory with 1.22.2 in it, copied LocalSettings.php and /images/ files from the working live directory to the new one. LocalSettings.php has entries for $wgDBuser, $wgDBpassword, $wgDBadminuser and $wgDBadminpassword all defined.
I have command line access to the server, and tried to run the update process in WikiNew, by
php maintenance/update.php
but it responds:
DB connection error: Unknown MySQL server host 'localhost:/tmp/mysql5.sock' (34) (localhost:/tmp/mysql5.sock)
If I do the same in WikiLive it works. Of course it does not perform any actual update, since I'd be updating 1.19.1 to 1.19.1, but the usual messages appear, with indications that changes are not required, and it purges caches. WikiLive, on 1.19, still works.
So the same data for the connection string exists in both copies of LocalSettings.php, both copies of maintenance/update.php are accessing the same MySQL database, but one accepts the connection string and the other doesn't.
Has something changed between 1.19 and 1.22? I've looked for 'Configuration changes' in the release notes for 1.20, 1.21, and 1.22, but see no instruction to make any change.
Please help!
Thank you.
For the record, the answer was to change the DB host from
$wgDBserver = "localhost:/tmp/mysql5.sock"
to just
$wgDBserver = "localhost"
The original string should have worked, but there is a bug in MediaWiki 1.22, described here:
https://bugzilla.wikimedia.org/show_bug.cgi?id=58153
"The new mysqli adapter in 1.22.0 does not properly implement non-standard MySQL
ports."
I ran into a similar problem, with the difference that the connect string for the MySQL server I use was of the form host:port:socket, as in localhost:3306:/var/lib/mysock.
I was attempting to install mediawiki-1.22.6 and the initial database check was failing. Using only localhost did not work for me, since the MySQL installation I am using requires both a port number and a socket.
I ended up making the following changes to the mediawiki-1.22.6 PHP scripts to allow parsing a connect string of the form host:port:socket.
WARNING: These changes are specific to my site and environment, and the modified parsing function probably will not work properly with other strings, for instance if the port number is not specified, as in localhost:/var/lib/mysock.
Here are the specific changes I made to complete the installation.
In the file IP.php (in the includes directory) there is a function, public static function splitHostAndPort( $both ), which parses the connect string into the parts needed for the real_connect() call in the function mysqlConnect( $realServer ), located in the file DatabaseMysqli.php under includes/db.
I modified splitHostAndPort() to the following:
public static function splitHostAndPort( $both ) {
    if ( substr( $both, 0, 1 ) === '[' ) {
        if ( preg_match( '/^\[(' . RE_IPV6_ADD . ')\](?::(?P<port>\d+))?$/', $both, $m ) ) {
            if ( isset( $m['port'] ) ) {
                return array( $m[1], intval( $m['port'] ) );
            } else {
                return array( $m[1], false );
            }
        } else {
            // Square bracket found but no IPv6
            return false;
        }
    }
    $numColons = substr_count( $both, ':' );
    if ( $numColons >= 2 ) {
        // Is it a bare IPv6 address?
        if ( preg_match( '/^' . RE_IPV6_ADD . '$/', $both ) ) {
            return array( $both, false );
        } else {
            // Not valid IPv6, but too many colons for anything else
            // may be of the form localhost:port:socket
            // return false;
        }
    }
    if ( $numColons >= 1 ) {
        // Host:port:socket?
        $bits = explode( ':', $both );
        if ( preg_match( '/^\d+/', $bits[1] ) ) {
            if ( $numColons > 1 ) {
                return array( $bits[0], intval( $bits[1] ), $bits[2] );
            } else {
                return array( $bits[0], intval( $bits[1] ) );
            }
        } else {
            // Not a valid port
            return false;
        }
    }
    // Plain hostname
    return array( $both, false );
}
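Ignoring the IPv6 branches, the host:port:socket split that the modified function performs can be sketched standalone (the function name here is mine, not MediaWiki's):

```php
<?php
// Split "host", "host:port", or "host:port:socket", mirroring the
// non-IPv6 behavior of the patched splitHostAndPort().
function splitHostPortSocket( string $both ): array {
    $bits = explode( ':', $both );
    if ( count( $bits ) === 1 ) {
        return array( $both, false );                       // plain hostname
    }
    if ( count( $bits ) === 2 ) {
        return array( $bits[0], intval( $bits[1] ) );       // host:port
    }
    return array( $bits[0], intval( $bits[1] ), $bits[2] ); // host:port:socket
}

print_r( splitHostPortSocket( 'localhost:3306:/var/lib/mysock' ) );
```

The caller can then distinguish the two-element and three-element results to decide whether a socket path was given.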
Next, I modified the mysqlConnect() function so that it checks whether both a port and a socket were returned (note the isset() guard, since the parser can return a two-element array):
// Other than mysql_connect, mysqli_real_connect expects an explicit port
// parameter. So we need to parse the port out of $realServer
$socketname = $port = null;
$hostAndPort = IP::splitHostAndPort( $realServer );
if ( $hostAndPort ) {
    $realServer = $hostAndPort[0];
    if ( $hostAndPort[1] ) {
        $port = $hostAndPort[1];
    }
    if ( isset( $hostAndPort[2] ) ) {
        $socketname = $hostAndPort[2];
    }
}
and then modified the call to real_connect() to also pass the socket:
if ( $mysqli->real_connect( $realServer, $this->mUser,
    $this->mPassword, $this->mDBname, $port, $socketname, $connFlags ) )

YII SQL Query Optimization

I have a huge list of IDs that I need to check against a table to find out whether each ID exists there, and if so, fetch its model.
Since there are a few thousand IDs, this process is really slow, as I'm using the CActiveRecord::find() method:
ex. $book = Book::model()->find('book_id=:book_id', array(':book_id'=>$product->book_id));
I have even indexed all possible keys, but still see no improvement.
Any suggestions to improve the execution speed?
Thanks in advance :)
1)
Make a list of book IDs:
foreach $product in Product-List
    $book_ids[$product->book_id] = $product->book_id;
Now query all Book models (indexed by book_id):
$books = Book::model()->findAll( array(
    'index'     => 'book_id',
    'condition' => 'book_id IN (' . implode( ',', $book_ids ) . ')',
) );
Integrate $books into your code; I believe you are looping through all products:
foreach $product in Product-List
    if ( isset( $books[$product->book_id] ) )
        $model = $books[$product->book_id];
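The payoff of the 'index' => 'book_id' option is constant-time lookups instead of one query per product. In plain PHP, the same re-keying can be sketched with array_column (the sample rows here are made up, standing in for the findAll() result):

```php
<?php
// Re-key a flat result set by book_id, as Yii's 'index' option does.
$rows = array(
    array( 'book_id' => 3, 'title' => 'A' ),
    array( 'book_id' => 7, 'title' => 'B' ),
);

// null keeps the whole row; 'book_id' becomes the array key.
$books = array_column( $rows, null, 'book_id' );

echo $books[7]['title']; // B
```

One IN() query plus an indexed lookup replaces thousands of single-row find() calls.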
2) Another way (I am just assuming you have a Product model): in the Product model, add a relation to Book:
public function relations() {
    .......
    'book' => array( self::HAS_ONE, 'Book', 'book_id' ),
    .......
}
While retrieving your product list, add a 'with' => array('book') condition, with any of CActiveDataProvider, CActiveRecord, etc.:
// Example
$productList = Product::model()->findAll( array(
    'with' => array( 'book' ),
) );
foreach ( $productList as $product ) {
    .......
    if ( $product->book != null )
        $model = $product->book;
    ......
}
Either way, you reduce the number of SQL queries.
It is better to use schema caching, because Yii fetches the table schema each time a query executes; enabling it will improve your query performance.
You can enable schema caching with some configuration in the config/main.php file:
return array(
    ......
    'components' => array(
        ......
        'cache' => array(
            'class' => 'system.caching.CApcCache', // caching type: APC cache
        ),
        'db' => array(
            ...........
            'schemaCachingDuration' => 3600, // lifetime of schema caching
        ),
    ),
);
One more thing: fetching only the specific columns you need will also improve performance.
You can do this by passing a CDbCriteria to CActiveRecord's find() method:
$criteria = new CDbCriteria;
$criteria->select    = 'book_id';
$criteria->condition = 'book_id=:book_id';
$criteria->params    = array( ':book_id' => $product->book_id );
$book = Book::model()->find( $criteria );
If it suits your use case, I would also suggest using a NoSQL database when processing thousands of records.