Error while running phpunit test in yii framework - mysql

I just tried to run the unit tests for the blog demo that ships with the Yii framework, but I got the error below and have not been able to resolve it. Do I need to set up a test database for PHPUnit? If so, how do I do that? Any pointers are welcome; thanks in advance.
C:\wamp\www\yii\demos\blog\protected\tests>phpunit --verbose unit\CommentTest
C:\wamp\www\yii\demos\blog\protected\tests/../config/test.php
PHPUnit 3.6.10 by Sebastian Bergmann.
Configuration read from C:\wamp\www\yii\demos\blog\protected\tests\phpunit.xml
EE
Time: 0 seconds, Memory: 7.75Mb
There were 2 errors:
1) CommentTest::testFindRecentComments
CDbException: The table "{{post}}" for active record class "Post" cannot be found in the database.
C:\wamp\www\yii\framework\db\ar\CActiveRecord.php:2264
C:\wamp\www\yii\framework\db\ar\CActiveRecord.php:379
C:\wamp\www\yii\framework\test\CDbFixtureManager.php:301
C:\wamp\www\yii\framework\test\CDbTestCase.php:118
C:\wamp\bin\php\php5.3.9\phpunit:46
2) CommentTest::testApprove
CException: Table 'tbl_post' does not exist.
C:\wamp\www\yii\framework\test\CDbFixtureManager.php:254
C:\wamp\www\yii\framework\test\CDbFixtureManager.php:145
C:\wamp\www\yii\framework\test\CDbFixtureManager.php:305
C:\wamp\www\yii\framework\test\CDbTestCase.php:118
C:\wamp\bin\php\php5.3.9\phpunit:46
FAILURES!
Tests: 2, Assertions: 0, Errors: 2.

Actually, I found the solution to this problem. The error means that the tables cannot be found in the database. So I executed the schema.mysql.sql file under the blog/protected/data directory, and everything worked fine.
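For reference, a minimal way to load that schema from the WAMP command line, assuming the database is named blog and you connect as root (adjust the database name and credentials to whatever protected/config/test.php actually uses):

cd C:\wamp\www\yii\demos\blog\protected\data
mysql -u root -p blog < schema.mysql.sql

After the import, tbl_post and the other blog tables exist, so CDbFixtureManager can load its fixtures and the tests run.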

Related

Random extremely long request in Rails app

I have some really strange behaviour in my Rails app.
Sometimes, roughly 1 request in every 500, a simple piece of ActiveRecord code takes a very long time (from 5 seconds to several minutes). Some examples:
User.where(id: params[:id]).first # or
current_user # (with *Devise*) or
Authentication.where(provider: 'whatever', user_id: 12345).first
I don't really know where to look: all of these queries have appropriate indexes (they usually run in milliseconds), the server load seems constant, and so on.
Has anyone had this kind of issue before, or any ideas on how I could track it down?
Thanks!
FYI, I'm working with Rails/ActiveRecord 3.2.17, puma 3.11.4, Docker (ECS on AWS), and MySQL 5.6.34.

Magento 2 : Exception #0 (Exception): Recoverable Error:

I moved a Magento 2 website from one server to another. After configuration, I got the error below on category pages:
1 exception(s):
Exception #0 (Exception): Recoverable Error: Argument 1 passed to Mageplaza\Core\Helper\AbstractData::__construct() must be an instance of Magento\Framework\App\Helper\Context, instance of Magento\Framework\ObjectManager\ObjectManager given, called in /SOME_PATH/vendor/magento/framework/ObjectManager/Factory/AbstractFactory.php on line 93 and defined in /SOME_PATH/app/code/Mageplaza/Core/Helper/AbstractData.php on line 56
I have tried the following to resolve it:
Reindexing
Re-saving the category pages from the backend
Creating a new category, whose page works fine
It seems there is a problem in the database where the old category URLs need to be reindexed, rewritten, or otherwise reprocessed.
Can anyone help me resolve this, or suggest how I can troubleshoot it further?
Any help is appreciated!
Thanks
Deleting the var/di directory resolves the problem. I didn't need to run any CLI command or clear any caches!
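In case it helps, the deletion itself is just this (run from the Magento root; on Windows, delete the folder in the file manager instead):

rm -rf var/di

Magento falls back to generating the compiled dependency-injection metadata on the fly once that directory is gone, which is presumably why no cache flush was needed.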

How to test whether log compaction is working or not in Kafka?

I made the changes in the server.properties file in Kafka 0.8.1.1, i.e. added log.cleaner.enable=true, and also set cleanup.policy=compact while creating the topic.
Now, while testing, I pushed the following (key, message) pairs to the topic:
Offset: 1 - (123, abc);
Offset: 2 - (234, def);
Offset: 3 - (345, ghi);
Offset: 4 - (123, changed)
The 4th message was pushed with the same key as an earlier input but a changed message, which is where log compaction should come into the picture. Yet using Kafka Tool, I can still see all 4 offsets in the topic. How can I tell whether log compaction is working or not? Should the earlier message have been deleted, or is log compaction working fine even though the new message has simply been appended?
Does this have anything to do with the log.retention.hours, topic.log.retention.hours, or log.retention.size configurations? What is the role of these configs in log compaction?
P.S. - I have gone through the Apache documentation thoroughly, but it is still not clear to me.
Even though this question is a few months old, I just came across it while doing research for my own question. I had created a minimal example showing how compaction works with Java; maybe it is helpful for you too:
https://gist.github.com/anonymous/f78184eaeec3ee82b15182aec24a432a
Furthermore, consulting the documentation, I used the following topic-level configuration to make compaction kick in as quickly as possible:
min.cleanable.dirty.ratio=0.01
cleanup.policy=compact
segment.ms=100
delete.retention.ms=100
When run, this class shows that compaction works: there is only ever one message with the same key on the topic.
With the appropriate settings, this is also reproducible on the command line, as sketched below.
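For example, a throwaway topic with those settings could be created like this (a sketch assuming a ZooKeeper-backed broker on localhost, as was standard for Kafka releases of that era; the topic name compaction-test is made up):

kafka-topics.sh --zookeeper localhost:2181 --create \
  --topic compaction-test --partitions 1 --replication-factor 1 \
  --config cleanup.policy=compact \
  --config min.cleanable.dirty.ratio=0.01 \
  --config segment.ms=100 \
  --config delete.retention.ms=100

Producing several keyed messages and then consuming the topic from the beginning should show only the latest value per key once the cleaner has run.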
Actually, log compaction only becomes visible once the number of messages reaches a very high count, e.g. 1 million. So if you have that much data, good. Otherwise, with configuration changes you can reduce this limit to, say, 100 messages, and you will then see that of the messages with the same key, only the latest remains and the previous ones are deleted. Log compaction is a better fit if you write a full snapshot of your data each time; otherwise you may lose earlier messages with the same key, which might still be useful.
To check a topic's properties from the CLI, you can use the kafka-topics command:
https://grokbase.com/t/kafka/users/14aev0snbd/command-line-tool-for-topic-metadata
It is also worth taking a look at log.roll.hours, which is 168 hours by default. In simple words: even if your topic is not very active and you cannot fill the maximum segment size (1 GB by default for normal topics, 100 MB for the offsets topic) within a week, you will still end up with a closed segment smaller than log.segment.bytes. That segment can then be compacted on the next run.
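On brokers of that generation, such topic-level overrides could also be applied to an existing topic with kafka-configs (a sketch; my-topic is a placeholder):

kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type topics --entity-name my-topic \
  --add-config segment.ms=100,min.cleanable.dirty.ratio=0.01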
You can check this with the kafka-topics CLI. I'm running it from Docker (confluentinc/cp-enterprise-kafka:6.0.0):
$ docker-compose exec kafka kafka-topics --zookeeper zookeeper:32181 --describe --topic count-colors-output
Topic: count-colors-output PartitionCount: 1 ReplicationFactor: 1 Configs: cleanup.policy=compact,segment.ms=100,min.cleanable.dirty.ratio=0.01,delete.retention.ms=100
Topic: count-colors-output Partition: 0 Leader: 1 Replicas: 1 Isr: 1
But don't get confused if you don't see anything in the Configs field; that happens when default values are in use. So unless you see cleanup.policy=compact in the output, the topic is not compacted.

How do I get Rails 4.x streaming to work with MySQL when testing?

I created a new Rails 4.2.1 test project to try out the new streaming feature (the 'Live' one, which I read about here). The project is set up to use MySQL for the database (I also tried SQLite but couldn't reproduce the issue with it). It is simple, consisting only of: 1) a model Test with 2 attributes (both strings), 2) a simple route resources :tests, and 3) a simple controller tests_controller with one action, index. The model and controller were generated by the standard Rails generators, and only the controller was modified, as follows:
class TestsController < ApplicationController
  include ActionController::Live

  def index
    response.headers['Content-Type'] = 'application/json'
    response.stream.write('{"count": 5, "tests": [')
    Test.find_each do |test|
      response.stream.write(test.to_json)
      response.stream.write(',')
    end
    response.stream.write(']}')
    response.stream.close
  end
end
When I run rails s and test by hand, everything seems fine. But when I added the test shown below, I get a strange error:
1) Error:
TestsControllerTest#test_index:
ActiveRecord::StatementInvalid: Mysql2::Error: This connection is in use by: #<Thread:0x007f862a4a7e48#/Users/xxx/.rvm/gems/ruby-2.2.2/gems/actionpack-4.2.1/lib/action_controller/metal/live.rb:269 sleep>: ROLLBACK
The test is:
require 'test_helper'

class TestsControllerTest < ActionController::TestCase
  test "index" do
    @request.headers['Accept'] = 'application/json'
    get :index
    assert_response :success
  end
end
Note that the error is intermittent, appearing only about half the time. Also, even though testing by hand doesn't cause any errors, I'm worried that errors will occur when multiple clients hit the API at the same time. Any suggestions as to what's going on here would be much appreciated.
Pretty old, but you need to actually check out a new database connection, since ActionController::Live executes the action in a separate thread:
The final caveat is that your actions are executed in a separate thread than the main thread. Make sure your actions are thread safe, and this shouldn't be a problem (don't share state across threads, etc).
https://github.com/rails/rails/blob/861b70e92f4a1fc0e465ffcf2ee62680519c8f6f/actionpack/lib/action_controller/metal/live.rb
You can even use an around_filter/around_action for this.
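A minimal sketch of that idea, wrapping the streaming body in ActiveRecord::Base.connection_pool.with_connection so the live thread checks out (and returns) its own connection instead of reusing the test thread's (untested against your setup; the model and JSON layout are taken from the question):

class TestsController < ApplicationController
  include ActionController::Live

  def index
    response.headers['Content-Type'] = 'application/json'
    # Check out a dedicated connection for this (live) thread and
    # return it to the pool when the block exits.
    ActiveRecord::Base.connection_pool.with_connection do
      response.stream.write('{"count": 5, "tests": [')
      Test.find_each do |test|
        response.stream.write(test.to_json)
        response.stream.write(',')
      end
      response.stream.write(']}')
    end
  ensure
    # Always close the stream, even if the query raises.
    response.stream.close
  end
end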

foreach within package function: does not work on first call

I am trying to add a parallel computation option to an R package (netresponse) based on doMC and multicore. The script works OK, but only on the second try.
To reproduce the bug, start R and run the script below. It gets stuck on the last line. After interrupting with Ctrl-C I get a few "select: Interrupted system call" messages. Running the same script again then gives the expected result without problems.
Is some further initialization needed to get this working properly on the first run? Or any other tips?
Thanks for your support,
- L
require(netresponse)
require(multicore)
require(doMC)
registerDoMC(3)
print(getDoParWorkers())
res <- foreach(i = 1:100, .combine = cbind,
               .packages = "netresponse") %dopar%
  netresponse::vdp.mixt(matrix(rnorm(1000), 100, 10))
Here's the list of dependencies from the help page for the netresponse package: "Depends: methods, igraph, graph, minet". I suspect that you are not getting all of them onto the workers by listing just "netresponse" in the .packages argument.
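A quick way to test that theory (a sketch; the dependency list is copied from the help page quoted above):

require(netresponse)
require(multicore)
require(doMC)
registerDoMC(3)

# Load netresponse and its declared dependencies on every worker,
# instead of relying on "netresponse" alone to pull them in.
res <- foreach(i = 1:100, .combine = cbind,
               .packages = c("netresponse", "methods", "igraph", "graph", "minet")) %dopar%
  netresponse::vdp.mixt(matrix(rnorm(1000), 100, 10))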
A quick fix for problems with foreach %dopar% is to reinstall these packages:
install.packages("doSNOW")
install.packages("doParallel")
install.packages("doMPI")
As mentioned in various threads on Stack Overflow, these packages are responsible for parallelism in R. A bug that existed in old versions of these packages has since been removed. I should mention that this will most likely help even if you are not using these packages directly in your project/package.