php-cs-fixer - disable/modify no_superfluous_phpdoc_tags rule - configuration

I want to disable the no_superfluous_phpdoc_tags rule.
/**
 * Refund a list of payments.
 *
 * @param float $amount
 * @param array $charges
 *
 * @throws PaymentException
 *
 * @return boolean
 */
With the rule's defaults it turns the above into this:
/**
 * Refund a list of payments.
 *
 * @throws PaymentException
 */
I tried this:
./vendor/bin/php-cs-fixer fix -vvv --rules='{"no_superfluous_phpdoc_tags": {"allow_unused_params": "true"}}'
But it doesn't affect the result (the params are still removed). What can I do to stop these params from being removed?

In the .php_cs.dist file:
'no_superfluous_phpdoc_tags' => false,
or in the console:
vendor/bin/php-cs-fixer fix --rules='{"no_superfluous_phpdoc_tags": false}'
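The same setting can live in the config file. Here is a sketch of a .php_cs.dist against the PHP-CS-Fixer v2 Config API (the finder path and the @PSR2 ruleset are illustrative, adjust them to your project):

```php
<?php
// Sketch of a .php_cs.dist for PHP-CS-Fixer v2. The finder path is
// illustrative; point it at your own source directories.
$finder = PhpCsFixer\Finder::create()
    ->in(__DIR__ . '/src');

return PhpCsFixer\Config::create()
    ->setRules([
        '@PSR2' => true,
        // Disable the rule entirely so @param/@return tags are never stripped.
        'no_superfluous_phpdoc_tags' => false,
    ])
    ->setFinder($finder);
```

Also note that --rules takes JSON, so the quoted "true" in the attempt above is a string rather than a boolean, which may be part of why it had no effect; and allow_unused_params only preserves @param tags for parameters that are absent from the signature, so it would not help here anyway.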


How to verify dropdown values for non-strings in TestCafe?

/**
 * Helper function to click dropdowns and select the passed text option
 *
 * @param {Selector} selector - Selector that picks the select web element
 * @param {string} text - Text value of the option to be selected
 */
async function selectDropDownOption(selector, text) {
    const option = selector.find("option");
    await t.scrollIntoView(selector).click(selector).click(option.withExactText(text));
}
Question: this works for a text option, but can we do the same when the option is passed as a Selector (a web element) instead?

PHPMyAdmin Database connection errors

I ran a query to this effect:
SELECT x.minid FROM (
    SELECT p.post_title, MIN(p.ID) AS minid, m.meta_value
    FROM wp_postmeta m
    INNER JOIN wp_posts p
        ON p.ID = m.post_id
        AND p.post_type = 'Product'
    WHERE m.meta_key = '_regular_price'
    AND NOT EXISTS (
        SELECT 1
        FROM wp_postmeta m1
        INNER JOIN wp_posts p1
            ON p1.ID = m1.post_id
            AND p1.post_type = 'Product'
        WHERE m1.meta_key = '_regular_price'
        AND p1.post_title = p.post_title
        AND m1.meta_value < m.meta_value
    )
    GROUP BY p.post_title, m.meta_value
) AS x
And after running for a long time, the query finally threw this error:
Error
SQL query:
SELECT `CHARACTER_SET_NAME` AS `Charset`, `DESCRIPTION` AS `Description` FROM `information_schema`.`CHARACTER_SETS`
mysqli_connect(): (08004/1040): Too many connections
mysqli_real_connect(): (28000/1045): Access denied for user 'ptrcao'@'localhost' (using password: NO)
mysqli_real_connect(): (28000/1045): Access denied for user 'ptrcao'@'localhost' (using password: NO)
Notice in ./libraries/classes/Config.php#855
Undefined index: collation_connection
Backtrace
./libraries/classes/Config.php#968: PhpMyAdmin\Config->_setConnectionCollation()
./libraries/common.inc.php#453: PhpMyAdmin\Config->loadUserPreferences()
./index.php#26: require_once(./libraries/common.inc.php)
Notice in ./libraries/classes/DatabaseInterface.php#1501
Undefined index: charset_connection
Backtrace
./libraries/classes/Config.php#857: PhpMyAdmin\DatabaseInterface->setCollation(string 'utf8mb4_unicode_ci')
./libraries/classes/Config.php#968: PhpMyAdmin\Config->_setConnectionCollation()
./libraries/common.inc.php#453: PhpMyAdmin\Config->loadUserPreferences()
./index.php#26: require_once(./libraries/common.inc.php)
Failed to set configured collation connection!
Notice in ./index.php#264
Undefined variable: collation_connection
Backtrace
Previously, I was having problems with the queries maxing out my memory resources, so I added 16G of swap memory as a stopgap measure. That seemed to stave off the #2013 - Lost connection to MySQL server during query error encountered previously, but instead, after running for a long time, the query produced all the errors above. Please advise.
/usr/local/cpanel/base/3rdparty/phpMyAdmin/libraries/classes/DatabaseInterface.php:
<?php
/* vim: set expandtab sw=4 ts=4 sts=4: */
/**
* Main interface for database interactions
*
* @package PhpMyAdmin-DBI
*/
namespace PhpMyAdmin;
use PhpMyAdmin\Core;
use PhpMyAdmin\Database\DatabaseList;
use PhpMyAdmin\Dbi\DbiExtension;
use PhpMyAdmin\Dbi\DbiDummy;
use PhpMyAdmin\Dbi\DbiMysql;
use PhpMyAdmin\Dbi\DbiMysqli;
use PhpMyAdmin\Di\Container;
use PhpMyAdmin\Error;
use PhpMyAdmin\Index;
use PhpMyAdmin\LanguageManager;
use PhpMyAdmin\Relation;
use PhpMyAdmin\SystemDatabase;
use PhpMyAdmin\Table;
use PhpMyAdmin\Types;
/usr/local/cpanel/base/3rdparty/phpMyAdmin/libraries/classes/Config.php:
<?php
/* vim: set expandtab sw=4 ts=4 sts=4: */
/**
* Configuration handling.
*
* @package PhpMyAdmin
*/
namespace PhpMyAdmin;
use DirectoryIterator;
use PhpMyAdmin\Core;
use PhpMyAdmin\Error;
use PhpMyAdmin\LanguageManager;
use PhpMyAdmin\ThemeManager;
use PhpMyAdmin\Url;
use PhpMyAdmin\UserPreferences;
use PhpMyAdmin\Util;
use PhpMyAdmin\Utils\HttpRequest;
/**
* Indication for error handler (see end of this file).
*/
$GLOBALS['pma_config_loading'] = false;
/**
* Configuration class
*
* @package PhpMyAdmin
*/
class Config
{
/**
* @var string default config source
*/
var $default_source = './libraries/config.default.php';
/**
* @var array default configuration settings
*/
var $default = array();
/**
* @var array configuration settings, without user preferences applied
*/
var $base_settings = array();
/**
* @var array configuration settings
*/
var $settings = array();
/**
* @var string config source
*/
var $source = '';
/**
* @var int source modification time
*/
var $source_mtime = 0;
var $default_source_mtime = 0;
/usr/local/cpanel/base/3rdparty/phpMyAdmin/index.php:
<?php
/* vim: set expandtab sw=4 ts=4 sts=4: */
/**
* Main loader script
*
* @package PhpMyAdmin
*/
use PhpMyAdmin\Charsets;
use PhpMyAdmin\Config;
use PhpMyAdmin\Core;
use PhpMyAdmin\Display\GitRevision;
use PhpMyAdmin\LanguageManager;
use PhpMyAdmin\Message;
use PhpMyAdmin\RecentFavoriteTable;
use PhpMyAdmin\Relation;
use PhpMyAdmin\Response;
use PhpMyAdmin\Sanitize;
use PhpMyAdmin\Server\Select;
use PhpMyAdmin\ThemeManager;
use PhpMyAdmin\Url;
use PhpMyAdmin\Util;
/**
* Gets some core libraries and displays a top message if required
*/
require_once 'libraries/common.inc.php';
/**
* pass variables to child pages
*/
$drops = array(
'lang',
'server',
'collation_connection',
'db',
'table'
);
foreach ($drops as $each_drop) {
if (array_key_exists($each_drop, $_GET)) {
unset($_GET[$each_drop]);
}
}
unset($drops, $each_drop);
/*
* Black list of all scripts to which front-end must submit data.
* Such scripts must not be loaded on home page.
*
*/
$target_blacklist = array (
'import.php', 'export.php'
);
// If we have a valid target, let's load that script instead
if (! empty($_REQUEST['target'])
&& is_string($_REQUEST['target'])
&& ! preg_match('/^index/', $_REQUEST['target'])
&& ! in_array($_REQUEST['target'], $target_blacklist)
&& Core::checkPageValidity($_REQUEST['target'], [], true)
) {
include $_REQUEST['target'];

ClickListener only works for elements in Table Libgdx

I have a main class that extends Table (table_A). Inside the table_A class I have a method that adds a new table (table_B) into my table_A (to achieve a scrollpane result) and, at the same time, assigns a click listener to table_B.
I thought that adding the ClickListener (ck) to table_B would make table_B itself detectable. However, only the elements inside table_B are detected by the ClickListener.
Please help.
My table_A adding table function as follow:
public void addRow(Table newRow) {
    scrollTable.row();
    scrollTable.add(newRow).width(newRow.getWidth()).pad(10);
    newRow.addListener(ck);
    newRow.debug();
    newRow.setName(stage++ + "");
}
My table_B code goes as follows:
add(challengeLabel).colspan(10).expandX().align(Align.left).fill();
row();
add(star1).colspan(1).size(starSpaceWidth, star1.getHeight() / star1.getWidth() * starSpaceWidth).expandX().fillX().align(Align.right);
add(star2).colspan(1).size(starSpaceWidth, star1.getHeight() / star1.getWidth() * starSpaceWidth);
add(star3).colspan(1).size(starSpaceWidth, star1.getHeight() / star1.getWidth() * starSpaceWidth);
add(star4).colspan(1).size(starSpaceWidth, star1.getHeight() / star1.getWidth() * starSpaceWidth);
add(star5).colspan(1).size(starSpaceWidth, star1.getHeight() / star1.getWidth() * starSpaceWidth);
If you take a look at Table's constructor, you will see this line:
setTouchable(Touchable.childrenOnly);
It is what causes your trouble: by default a Table only lets its children receive touch events. Change the setting for your table_B, e.g. table_B.setTouchable(Touchable.enabled);

What's the best way to scrape specific content from multiple HTML files?

I have quite a few HTML files of webpages with many pieces of information. I am trying to extract some of the content and place it into an XML file or possibly an Excel spreadsheet. All the webpages are quite similar by design, and the information is placed in the same locations across all pages. Does anybody know of a way to do this?
There are many scraper libraries that can help you extract data from HTML pages.
Web scraping and crawling is not always so straightforward, so it depends on what you’re trying to achieve. Different products, SDK, libraries, etc., focus on different aspects of scraping or crawling. Here are a few you can check out:
Apify - (formerly Apifier) a cloud-based web scraper that extracts structured data from any website using a few simple lines of JavaScript.
Diffbot - extracts data from web pages automatically and returns structured JSON.
Espion - a headless browser that enables you to inject JavaScript code directly into your target web pages.
Also, if you know Node.js, then node-osmosis is a really cool and easy-to-use library.
I strongly recommend this library:
http://sourceforge.net/projects/simplehtmldom/
/**
* Website: http://sourceforge.net/projects/simplehtmldom/
* Acknowledge: Jose Solorzano (https://sourceforge.net/projects/php-html/)
* Contributions by:
* Yousuke Kumakura (Attribute filters)
* Vadim Voituk (Negative indexes supports of "find" method)
* Antcs (Constructor with automatically load contents either text or file/url)
*
* all affected sections have comments starting with "PaperG"
*
* Paperg - Added case insensitive testing of the value of the selector.
* Paperg - Added tag_start for the starting index of tags - NOTE: This works but not accurately.
* This tag_start gets counted AFTER \r\n have been crushed out, and after the remove_noice calls so it will not reflect the REAL position of the tag in the source,
* it will almost always be smaller by some amount.
* We use this to determine how far into the file the tag in question is. This "percentage will never be accurate as the $dom->size is the "real" number of bytes the dom was created from.
* but for most purposes, it's a really good estimation.
* Paperg - Added the forceTagsClosed to the dom constructor. Forcing tags closed is great for malformed html, but it CAN lead to parsing errors.
* Allow the user to tell us how much they trust the html.
* Paperg add the text and plaintext to the selectors for the find syntax. plaintext implies text in the innertext of a node. text implies that the tag is a text node.
* This allows for us to find tags based on the text they contain.
* Create find_ancestor_tag to see if a tag is - at any level - inside of another specific tag.
* Paperg: added parse_charset so that we know about the character set of the source document.
* NOTE: If the user's system has a routine called get_last_retrieve_url_contents_content_type available, we will assume it's returning the content-type header from the
* last transfer or curl_exec, and we will parse that and use it in preference to any other method of charset detection.
*
* Found infinite loop in the case of broken html in restore_noise. Rewrote to protect from that.
* PaperG (John Schlick) Added get_display_size for "IMG" tags.
*
* Licensed under The MIT License
* Redistributions of files must retain the above copyright notice.
*
* @author S.C. Chen <me578022@gmail.com>
* @author John Schlick
* @author Rus Carroll
* @version 1.5 ($Rev: 196 $)
* @package PlaceLocalInclude
* @subpackage simple_html_dom
*/
/**
* All of the Defines for the classes below.
* @author S.C. Chen <me578022@gmail.com>
*/
Here's an example:
$html = file_get_html($ad_bachecubano_url);
// Proceed to capture the text
$anuncio['header'] = $html->find('.headingText', 0)->plaintext;
$anuncio['body'] = $html->find('.showAdText', 0)->plaintext;
$precio = $html->find('#lineBlock');
foreach ($precio as $possibleprice) {
    $item = $possibleprice->find('.headingText2', 0)->plaintext;
    $precio = 0;
    if ($item == "Precio: ") {
        $precio = $possibleprice->find('.normalText', 0)->plaintext;
        $anuncio['price'] = $this->getFinalPrice($precio);
    } else {
        continue;
    }
}
$contactbox = $html->find('#contact');
foreach ($contactbox as $contact) {
    $boxes = $contact->find('#lineBlock');
    foreach ($boxes as $box) {
        $key = $box->find('.headingText2', 0)->plaintext;
        $value = $box->find('.normalText', 0)->plaintext;
        if ($key == "Nombre: ") {
            $anuncio['nombre'] = $value;
        }
        if ($key == "Teléfono: ") {
            $anuncio['phone'] = $value;
        }
    }
}
$anuncio['email'] = scrapeemail($anuncio['body'])[0][0];
if (!isset($anuncio['email']) || $anuncio['email'] == '') {
    $anuncio['email'] = "";
}

Using json API pull to store in file or database

I am trying to pull data from the justin.tv API and store the output echoed by the code below in a database or a file, to be included in the sidebar of a website. I am not sure how to do this. An example of what I am trying to achieve is the live streamers list in the sidebar of teamliquid.net. I have done this, but the way I did it slows the site way down because it makes about 50 JSON requests every time the page loads. I just need to get this into a cached file that updates every 60 seconds or so. Any ideas?
<?php
$json_file = file_get_contents("http://api.justin.tv/api/stream/list.json?channel=colcatz");
$json_array = json_decode($json_file, true);
if ($json_array[0]['name'] == 'live_user_colcatz') echo 'coL.CatZ Live<br>';
$json_file = file_get_contents("http://api.justin.tv/api/stream/list.json?channel=coldrewbie");
$json_array = json_decode($json_file, true);
if ($json_array[0]['name'] == 'live_user_coldrewbie') echo 'coL.drewbie Live<br>';
?>
I'm not entirely sure how you would imagine this being cached, but the code below is an adaption of a block of code I've used in the past for some Twitter work. There are a few things that could probably be done better from a security perspective. Anyway, this gives you a generic way of grabbing the Feed, parsing through it, and then sending it to the database.
Warning: This assumes that there is a database connection already established within your own system.
/**
* Class SM
*
* Define a generic wrapper class with some system
* wide functionality. In this case we'll give it
* the ability to fetch a social media feed from
* another server for parsing and possibly caching.
*
*/
class SM {
    private $api, $init, $url;

    public function fetch_page_contents($url) {
        $init = curl_init();
        try {
            curl_setopt($init, CURLOPT_URL, $url);
            curl_setopt($init, CURLOPT_HEADER, 0);
            curl_setopt($init, CURLOPT_RETURNTRANSFER, 1);
        } catch (Exception $e) {
            error_log($e->getMessage());
        }
        $output = curl_exec($init);
        curl_close($init);
        return $output;
    }
}
/**
* Class JustinTV
*
* Define a specific site wrapper for getting the
* timeline for a specific user from the JustinTV
* website. Optionally you can return the code as
* a JSON string or as a decoded PHP array with the
* $api_decode argument in the get_timeline function.
*
*/
class JustinTV extends SM {
    private $timeline_document,
            $api_user,
            $api_format,
            $api_url;

    public function get_timeline($api_user, $api_decode = 1, $api_format = 'json', $api_url = 'http://api.justin.tv/api/stream/list') {
        $timeline_document = $api_url . '.' . $api_format . '?channel=' . $api_user;
        $SM_init = new SM();
        $decoded_json = json_decode($SM_init->fetch_page_contents($timeline_document));
        // Make sure that our JSON is really JSON
        if ($decoded_json === null && json_last_error() !== JSON_ERROR_NONE) {
            error_log('Badly formed, dangerous, or altered JSON string detected. Exiting program.');
        }
        if ($api_decode == 1) {
            return $decoded_json;
        }
        return $SM_init->fetch_page_contents($timeline_document);
    }
}
/**
* Instantiation of the class
*
* Instantiate our JustinTV class and fetch a user timeline
* from JustinTV for the user colcatz. Then loop through
* the results and enter each of the individual results
* into a database table called cache_sm_justintv.
*
*/
$SM_JustinTV = new JustinTV();
$user_timeline = $SM_JustinTV->get_timeline('colcatz');
foreach ($user_timeline as $entry) {
    // Here you could check whether the entry already exists in the system before you cache it, thus reducing duplicate IDs
    $date = date('U');
    // Serialize the entry object and include the date in the VALUES list
    $query = sprintf(
        "INSERT INTO `cache_sm_justintv` (`id`, `cache_content`, `date`) VALUES (%d, '%s', '%s')",
        $entry->id,
        json_encode($entry),
        $date
    );
    $result = mysql_query($query);
    // Do some other stuff, and then close the MySQL connection when you're done
}
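As for the 60-second refresh the question asks about, a timestamped file is often enough; here is a minimal sketch (the cache path, TTL, and the stand-in fetch closure are illustrative, not part of the answer above):

```php
<?php
// Minimal file cache: reuse the cached body while it is younger than $ttl
// seconds, otherwise call $fetch and overwrite the cache.
function cached_fetch(string $cacheFile, int $ttl, callable $fetch): string
{
    if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
        return file_get_contents($cacheFile);
    }
    $fresh = $fetch();
    file_put_contents($cacheFile, $fresh, LOCK_EX);
    return $fresh;
}

// Usage: the API is hit at most once per 60 seconds, no matter how often
// the sidebar is rendered.
$json = cached_fetch(sys_get_temp_dir() . '/justintv_colcatz.json', 60, function () {
    // Stand-in for the real call from the question, e.g.
    // file_get_contents('http://api.justin.tv/api/stream/list.json?channel=colcatz')
    return '[{"name":"live_user_colcatz"}]';
});
```

The same wrapper works for all 50 channels: one cache file per channel, and the page render only ever reads local files once the caches are warm.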