Yii2 Codeception uses the wrong application class

When I change the application class in my acceptance test, the test actually uses that class; when I do the same in a functional test, the test still uses yii\web\Application.
I need it to use my common\components\Application class.
How can I change that?
The functional _bootstrap already points to my custom class (common\components\Application).
I am totally baffled.
When I run my testing code:
use tests\codeception\frontend\FunctionalTester;
$I = new FunctionalTester($scenario);
$I->amOnPage('/');
Then I get the error:
[yii\base\UnknownPropertyException] Getting unknown property: yii\web\Application::nowSQL
This nowSQL property is defined in common\components\Application, but somehow the functional test still uses the default yii\web\Application.
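For context, a property like nowSQL only exists when the custom application class is actually instantiated; a minimal sketch of such a class (hypothetical, since the real common/components/Application.php is not shown here) looks like this:
<?php
namespace common\components;

// Hypothetical sketch of the custom application class from the question.
class Application extends \yii\web\Application
{
    // Yii's magic __get() maps $app->nowSQL to this getter, which is why the
    // stock yii\web\Application throws UnknownPropertyException instead.
    public function getNowSQL()
    {
        return new \yii\db\Expression('NOW()');
    }
}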
acceptance.suite.yml:
# Codeception Test Suite Configuration
# suite for acceptance tests.
# perform tests in browser using the Selenium-like tools.
# powered by Mink (http://mink.behat.org).
# (tip: that's what your customer will see).
# (tip: test your ajax and javascript by one of Mink drivers).
# RUN `build` COMMAND AFTER ADDING/REMOVING MODULES.
class_name: AcceptanceTester
modules:
    enabled:
        - PhpBrowser
        - tests\codeception\common\_support\FixtureHelper
        # you can use WebDriver instead of PhpBrowser to test javascript and ajax.
        # This will require you to install selenium. See http://codeception.com/docs/04-AcceptanceTests#Selenium
        # "restart" option is used by the WebDriver to start each time per test-file new session and cookies,
        # it is useful if you want to login in your app in each test.
        # - WebDriver
    config:
        PhpBrowser:
            # PLEASE ADJUST IT TO THE ACTUAL ENTRY POINT WITHOUT PATH INFO
            url: http://example.com
        # WebDriver:
        #     url: http://localhost:8080
        #     browser: firefox
        #     restart: true
functional.suite.yml:
# Codeception Test Suite Configuration
# suite for functional (integration) tests.
# emulate web requests and make application process them.
# (tip: better to use with frameworks).
# RUN `build` COMMAND AFTER ADDING/REMOVING MODULES.
#basic/web/index.php
class_name: FunctionalTester
modules:
    enabled:
        - Filesystem
        - Yii2
        - tests\codeception\common\_support\FixtureHelper
    config:
        Yii2:
            configFile: '../config/frontend/functional.php'

In order to use your custom application class for functional tests, set the 'class' configuration in your 'functional.php' config.
functional.suite.yml:
class_name: FunctionalTester
modules:
    enabled:
        - Yii2
    config:
        Yii2:
            configFile: 'codeception/config/functional.php'
functional.php:
return [
    'class' => 'my\custom\Application',
    ...
];
Regarding acceptance testing, you have to change the Application implementation used in your index-test.php:
[...]
(new my\custom\Application($config))->run();
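For example, a test entry script along these lines would do it; the file name and require paths below are assumptions based on the default advanced-template layout (frontend/web/index-test.php) and may differ in your project:
<?php
// Sketch of a test entry script (e.g. frontend/web/index-test.php);
// adjust the require paths and config file to your own layout.
defined('YII_DEBUG') or define('YII_DEBUG', true);
defined('YII_ENV') or define('YII_ENV', 'test');

require __DIR__ . '/../../vendor/autoload.php';
require __DIR__ . '/../../vendor/yiisoft/yii2/Yii.php';

$config = require __DIR__ . '/../../tests/codeception/config/frontend/acceptance.php';

// Instantiate the custom application class instead of yii\web\Application
// so custom properties such as nowSQL are available during acceptance tests.
(new common\components\Application($config))->run();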

Related

Kibana cannot capture JSON logs that start with "code"="0000000"

Hello, I am very new to the Elastic Stack and I am trying to extract useful information from system logs (in JSON format) in Kibana. I am using Filebeat to send them to Kibana in Elastic Cloud.
Problem: basically, I cannot view the data I send to Kibana when it is in the following format:
{"code":"28000","file":"auth.c","length":164,"level":"error","line":"496","message":"no pg_hba.conf entry",everity":"FATAL","timestamp":"2022-10-05 08:40:14.308"}
Kibana is ignoring all the lines that start with "code":"000000".
However, logs in the following formats have no problem:
{"code":"SELF_SIGNED_CERT_IN_CHAIN","level":"error","message":"self signed certificate in certificate chain,"requestId":"166666666623piq9lzd2ngllllll","timestamp":"2022-10-05 08:40:39.908"}
and
{"level":"error","message":"Login was refused using authentication mechanism PLAIN.","requestId":"199999999udchsuz8d2s5aoszqw6w7","stack":"Error: Handshake terminated by server","timestamp":"2022-10-05 08:57:37.330"}
Please see the filebeat.yml below.
###################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.
# ============================== Filebeat inputs ===============================
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# filestream is an input for collecting log messages from files.
- type: log

  # Unique ID among all inputs, an ID is required.
  # id: my-filestream-id

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - C:\bi\hd-be-logs\hd-be-logs\combined\*.log
    #- c:\programdata\elasticsearch\logs\*

  json.keys_under_root: true
  json.overwrite_keys: true
  json.add_error_key: true
  json.expand_keys: true
  json.close_inactive: true
  json.clean_removed: true

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1
# ============================== Filebeat modules ==============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false
# ================================== General ===================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false
# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
# =================================== Kibana ===================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
# Kibana Host
# Scheme and port can be left out and will be set to the default (http and 5601)
# In case you specify and additional path, the scheme is required: http://localhost:5601/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
#host: "localhost:5601"
# Kibana Space ID
# ID of the Kibana Space into which the dashboards should be loaded. By default,
# the Default Space will be used.
#space.id:
# =============================== Elastic Cloud ================================
# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
cloud.id: My_deployment:cloudid
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
cloud.auth: elastic:4----------F
# ================================== Outputs ===================================
# Configure what output to use when sending the data collected by the beat.
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
# Array of hosts to connect to.
# hosts: ["localhost:9200"]
# Protocol - either `http` (default) or `https`.
#protocol: "https"
# Authentication credentials - either API key or username/password.
# api_key: "changeme"
#username: "elastic"
#password: "changeme"
# ------------------------------ Logstash Output -------------------------------
#output.logstash:
# The Logstash hosts
#hosts: ["localhost:5044"]
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
# ================================== Logging ===================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]
# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.
# Set to true to enable the monitoring reporter.
#monitoring.enabled: false
# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:
# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:
# ============================== Instrumentation ===============================
# Instrumentation support for the filebeat.
#instrumentation:
# Set to true to enable instrumentation of filebeat.
#enabled: false
# Environment in which filebeat is running on (eg: staging, production, etc.)
#environment: ""
# APM Server hosts to report instrumentation results to.
#hosts:
# - http://localhost:8200
# API Key for the APM Server(s).
# If api_key is set then secret_token will be ignored.
#api_key:
# Secret token for the APM Server(s).
#secret_token:
# ================================= Migration ==================================
# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

Test databases in GitLab CI

I would like to use test databases for feature branches.
Of course it would be best to create a gitlab ci environment on the fly (review apps style) and also create a test database on the target system with the same name. Unfortunately, this is not possible because the MySQL databases in the target system have fixed names, like xxx_1, xxx_2 etc. and this cannot be changed without moving to a different hosting provider.
So I would like to do something like "grab an empty test database from the given xxx_n, and then empty it again when the branch is deleted".
How could this be handled with gitlab ci?
Can I set a variable on the project that says "feature branch Y already uses database xxx_4"?
Or should I put a table into the test database to store this information?
Using dynamic environments/variables and stop jobs might be able to do the trick. Stop jobs run when the environment is "stopped" -- in the case of feature branches without associated MRs, when the feature branch is deleted (or, if there is an open MR for the review app, when the MR is merged or closed).
Can I set a variable on the project that says "feature branch Y already uses database xxx_4"?
One way may be to put the db name directly in the environment name. Then the Environments API keeps track of this.
stages:
  - pre-deploy
  - deploy

determine_database:
  stage: pre-deploy
  image: python:3.9-slim
  script:
    - pip install python-gitlab
    - database_name=$(determine-database) # determine what database names are not currently in use
    - echo "database_name=${database_name}" > vars.env
  artifacts:
    reports: # automatically set $database_name variable in subsequent jobs
      dotenv: "vars.env"

deploy_review_app:
  stage: deploy
  environment:
    name: review/$CI_COMMIT_REF_SLUG/$database_name
    on_stop: teardown
  script:
    - echo "deploying review app for $CI_COMMIT_REF with database name configuration $database_name"
    - ... # steps to actually do the deploy

teardown: # this will trigger when the environment is stopped
  stage: deploy
  variables:
    GIT_STRATEGY: none # ensures this works even if the branch is deleted
  when: manual
  script:
    - echo "tearing down test database $database_name"
    - ... # actual script steps to stop env and cleanup database
  environment:
    name: review/$CI_COMMIT_REF_SLUG/$database_name
    action: "stop"
The implementation of the determine-database command may have to connect to your database to determine what database names are available (or perhaps you have a set of these provisioned in advance). You can then inspect the GitLab environments API to see what database names are still in use (since it's baked into the environment name).
For example, you might have something like this. Here, I am using the python-gitlab API wrapper just because it's most familiar to me, but the same principle can be applied to any method of calling the GitLab REST API.
#!/usr/bin/env python3
import gitlab
import os, sys, random

GITLAB_URL = os.environ['CI_SERVER_URL']
PROJECT_TOKEN = os.environ['MY_PROJECT_TOKEN']  # you generate and add this to your CI/CD variables!
PROJECT_ID = os.environ['CI_PROJECT_ID']
DATABASE_NAMES = ['xxx_1', 'xxx_2', 'xxx_3']  # or determine this programmatically by connecting to the DB

gl = gitlab.Gitlab(GITLAB_URL, private_token=PROJECT_TOKEN)
in_use_databases = []
project = gl.projects.get(PROJECT_ID)

for environment in project.environments.list(state='available', all=True):
    # the in-use database name is the string after the last '/' in the env name
    in_use_db_name = environment.name.split('/')[-1]
    in_use_databases.append(in_use_db_name)

available_databases = [name for name in DATABASE_NAMES if name not in in_use_databases]
if not available_databases:  # bail if all databases are in use
    print('FATAL. no available databases', file=sys.stderr)
    raise SystemExit(1)

# otherwise pick one and output to stdout
db_name = random.choice(available_databases)
# optionally you could prepare the database here, too, instead of relying on the `on_stop` job.
print(db_name)
There is a potential concurrency problem here (two runs of determine_database concurrently on different branches can potentially select the same db twice before either finish) but that could be addressed with resource locks.
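One way to address that, sketched below, is GitLab's resource_group keyword, which allows only one job from the named group to run at a time across pipelines, so two branches cannot both pick the same free database (the group name here is arbitrary):
determine_database:
  stage: pre-deploy
  # Only one job in this resource group runs at a time, across all pipelines,
  # which serializes database selection between branches.
  resource_group: test-database-allocation
  image: python:3.9-slim
  script:
    - pip install python-gitlab
    - database_name=$(determine-database)
    - echo "database_name=${database_name}" > vars.env
  artifacts:
    reports:
      dotenv: "vars.env"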

Setting up a CSV Data Adapter locally

I am trying to set up the Data Visualization extension to use data from a CSV file for the sensors, based on this example:
https://forge.autodesk.com/en/docs/dataviz/v1/developers_guide/advanced_topics/csv_adapter/
The CSV data I am trying to use is the default Hyperion-1.csv in the folder server\gateways\csv. Do I need to add/change some other settings as well?
It is showing the following error in Chrome console:
I have these settings for the csv in .env file.
And these in devices.json in server\gateways\synthetic-data folder.
I've just taken the following steps to enable the CSV data adapter which seemed to work fine:
Clone the repo: git clone https://github.com/Autodesk-Forge/forge-dataviz-iot-reference-app
Install dependencies: npm install
Create a copy of server/env_template and rename it to server/.env
Modify the contents of server/.env, commenting out all the initial env. variables, uncommenting the CSV-related env. vars, and setting their corresponding values:
# FORGE_CLIENT_ID=
# FORGE_CLIENT_SECRET=
# FORGE_ENV=AutodeskProduction
# FORGE_API_URL=https://developer.api.autodesk.com
# FORGE_CALLBACK_URL=http://localhost:9000/oauth/callback
#
# FORGE_BUCKET=
# ENV=local
# ADAPTER_TYPE=local
## Please uncomment the following part if you want to connect to Azure IoTHub and Time Series Insights
## Connect to Azure IoTHub and Time Series Insights
# ADAPTER_TYPE=azure
# AZURE_IOT_HUB_CONNECTION_STRING=
# AZURE_TSI_ENV=
#
## Azure Service Principle
# AZURE_CLIENT_ID=
# AZURE_APPLICATION_SECRET=
# AZURE_TENANT_ID=
# AZURE_SUBSCRIPTION_ID=
#
## Path to Device Model configuration File
# DEVICE_MODEL_JSON=
## End - Connect to Azure IoTHub and Time Series Insights
## Please uncomment the following part if you want to use a CSV file as the time series provider
ADAPTER_TYPE=csv
CSV_MODEL_JSON=server/gateways/synthetic-data/device-models.json
CSV_DEVICE_JSON=server/gateways/synthetic-data/devices.json
CSV_FOLDER=server/gateways/csv/
CSV_DATA_START=2011-02-01T08:00:00.000Z
CSV_DATA_END=2011-02-20T13:51:10.511Z
CSV_DELIMITER="\t"
CSV_LINE_BREAK="\n"
CSV_TIMESTAMP_COLUMN="time"
CSV_FILE_EXTENSION=".csv"
## End - Please uncomment the following part if you want to use a CSV file as the time series provider
Run the app with ENV set to "local": ENV=local npm run dev
After these steps the app runs successfully; however, you'll get some other errors because the server/gateways/csv folder only contains data for a single sensor (Hyperion-1).
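For reference, with the settings above (tab delimiter, a "time" timestamp column, .csv extension), each per-sensor file such as Hyperion-1.csv is expected to be a tab-separated table along these lines; the property columns below (Temperature, Humidity) are purely illustrative and must match whatever properties your device-models.json declares:
time	Temperature	Humidity
2011-02-01T08:00:00.000Z	21.5	48.2
2011-02-01T08:05:00.000Z	21.9	47.6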
Btw. I've been working on an alternative DataViz sample app that aims to be simpler and easier to reuse: https://github.com/petrbroz/forge-iot-extensions-demo (which uses https://github.com/petrbroz/forge-iot-extensions under the hood).

What is the difference between with and env?

I am new to GitHub Actions, and I see two things used for configuring steps (correct me if I am wrong): with and env.
What is the difference between these two, and how are they used?
uses: someAction
with:
  x: 10
  y: 20
env:
  x1: 30
  y2: 40
with - is used specifically for passing input parameters to the action.
env - is used specifically for introducing environment variables, whose visibility depends on the scope they are defined at (see the sketch below):
workflow envs - can be accessed by all resources in the workflow except services
job envs - can be accessed by all resources under the job except services
step envs - can be accessed by any resource within the step
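A minimal workflow sketch of those three scopes (the variable names and values here are made up purely for illustration):
name: env-scope-demo
on: push

env:
  WORKFLOW_VAR: "visible to every job and step in the workflow"

jobs:
  demo:
    runs-on: ubuntu-latest
    env:
      JOB_VAR: "visible to every step of this job"
    steps:
      - name: Step with its own env
        env:
          STEP_VAR: "visible only while this step runs"
        run: |
          echo "$WORKFLOW_VAR"
          echo "$JOB_VAR"
          echo "$STEP_VAR"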
Here is an example of how parameters are handled.
Let's say an action is created with the following parameter in its action.yaml:
name: 'Npm Audit Action'
inputs:
  dirPath:
    description: 'Directory path of the project to audit'
    required: true
    default: './'
Then we provide this parameter through the with key in our workflow:
- name: Use the action
  uses: meroware/npm-audit-action@v1.0.2
  with:
    dirPath: vulnerable-project
Then, in the action code, we would handle it like this if building a Node.js action:
const core = require("@actions/core");
const dirPath = core.getInput("dirPath");
Envs within actions are accessed differently. Let's say we are building a Node.js action; then we would access them through process.env. Going back to our example action:
name: 'Npm Audit Action'
env:
  SOME_ENV: 'hey I am an env'
Then this could be accessed as
const { SOME_ENV } = process.env
You can see in the documentation that with: is used to define a map of input parameters passed to the action,
while env defines an environment variable, as described in the documentation for env and jobs.<job_id>.env:
an environment variable defined in a step will override job and workflow variables with the same name, while the step executes.
A variable defined for a job will override a workflow variable with the same name, while the job executes.
You can use either one to pass secrets to an action:
steps:
  - name: Hello world action
    with: # Set the secret as an input
      super_secret: ${{ secrets.SuperSecret }}
    env: # Or as an environment variable
      super_secret: ${{ secrets.SuperSecret }}
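Inside a Node.js action, those two forms would then be read like this (same pattern as the earlier snippet; the names simply mirror the example above):
const core = require("@actions/core");
// secret passed via `with:` becomes an action input
const fromInput = core.getInput("super_secret");
// secret passed via `env:` becomes an environment variable
const fromEnv = process.env.super_secret;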

Why is Rails trying to connect to MySQL?

I have been using MySQL in a new Rails application, but now I want to give MongoDB a try, so I installed MongoMapper and Mongoid (to use a Mongo session store). The installation seems to be fine because I can create Mongo models. But for some reason Rails is still trying to connect to MySQL: "Can't connect to local MySQL server".
This is horrible, because even if I weren't using Mongo, Rails shouldn't be trying to connect to MySQL for every request. It throws that error even for non-existent URLs.
What can I do to debug this? I guess I could try removing the mysql gem from the Gemfile and running bundle install. But I still don't like the fact that it's trying to connect even when I'm not using it. Shouldn't it try to connect lazily (i.e. only on demand)?
development.rb:
Myapp::Application.configure do
  # Settings specified here will take precedence over those in config/application.rb

  # In the development environment your application's code is reloaded on
  # every request. This slows down response time but is perfect for development
  # since you don't have to restart the web server when you make code changes.
  config.cache_classes = false

  # Log error messages when you accidentally call methods on nil.
  config.whiny_nils = true

  # Show full error reports and disable caching
  config.consider_all_requests_local = true
  config.action_controller.perform_caching = false

  # Don't care if the mailer can't send
  config.action_mailer.raise_delivery_errors = false

  # Print deprecation notices to the Rails logger
  config.active_support.deprecation = :log

  # Only use best-standards-support built into browsers
  config.action_dispatch.best_standards_support = :builtin

  # Raise exception on mass assignment protection for Active Record models
  config.active_record.mass_assignment_sanitizer = :strict

  # Log the query plan for queries taking more than this (works
  # with SQLite, MySQL, and PostgreSQL)
  config.active_record.auto_explain_threshold_in_seconds = 0.5

  # Do not compress assets
  config.assets.compress = false

  # Expands the lines which load the assets
  config.assets.debug = true
end
application.rb:
require File.expand_path('../boot', __FILE__)

require 'rails/all'

if defined?(Bundler)
  # If you precompile assets before deploying to production, use this line
  Bundler.require(*Rails.groups(:assets => %w(development test)))
  # If you want your assets lazily compiled in production, use this line
  # Bundler.require(:default, :assets, Rails.env)
end

module Myapp
  class Application < Rails::Application
    # Settings in config/environments/* take precedence over those specified here.
    # Application configuration should go into files in config/initializers
    # -- all .rb files in that directory are automatically loaded.

    # Custom directories with classes and modules you want to be autoloadable.
    # config.autoload_paths += %W(#{config.root}/extras)

    # Only load the plugins named here, in the order given (default is alphabetical).
    # :all can be used as a placeholder for all plugins not explicitly named.
    # config.plugins = [ :exception_notification, :ssl_requirement, :all ]

    # Activate observers that should always be running.
    # config.active_record.observers = :cacher, :garbage_collector, :forum_observer

    # Set Time.zone default to the specified zone and make Active Record auto-convert to this zone.
    # Run "rake -D time" for a list of tasks for finding time zone names. Default is UTC.
    # config.time_zone = 'Central Time (US & Canada)'

    # The default locale is :en and all translations from config/locales/*.rb,yml are auto loaded.
    # config.i18n.load_path += Dir[Rails.root.join('my', 'locales', '*.{rb,yml}').to_s]
    # config.i18n.default_locale = :de

    # Configure the default encoding used in templates for Ruby 1.9.
    config.encoding = "utf-8"

    # Configure sensitive parameters which will be filtered from the log file.
    config.filter_parameters += [:password]

    # Use SQL instead of Active Record's schema dumper when creating the database.
    # This is necessary if your schema can't be completely dumped by the schema dumper,
    # like if you have constraints or database-specific column types
    # config.active_record.schema_format = :sql

    # Enforce whitelist mode for mass assignment.
    # This will create an empty whitelist of attributes available for mass-assignment for all models
    # in your app. As such, your models will need to explicitly whitelist or blacklist accessible
    # parameters by using an attr_accessible or attr_protected declaration.
    # config.active_record.whitelist_attributes = true

    # Enable the asset pipeline
    config.assets.enabled = true

    # Version of your assets, change this if you want to expire all your assets
    config.assets.version = '1.0'

    config.generators do |g|
      g.orm :mongo_mapper
    end
  end
end
When ActiveRecord is part of the application, it tries to establish a connection to the database at startup. If it fails to connect, the application won't start.
The problem is here:
require 'rails/all'
This line includes all the "usual" Rails components, ActiveRecord among them. If you go to its definition, it should look like this (for Rails 3.2):
require "rails"
%w(
active_record
action_controller
action_mailer
active_resource
rails/test_unit
sprockets
).each do |framework|
begin
require "#{framework}/railtie"
rescue LoadError
end
end
Take this code, remove the active_record line, and put it in place of your require 'rails/all' line. Now ActiveRecord isn't included, and your application will loudly fail when it sees ActiveRecord references in the code, like this:
config.active_record.mass_assignment_sanitizer = :strict
You need to remove these references too. You don't have to delete database.yml, but you probably should, since it has no meaning now.
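In other words, the top of application.rb would look roughly like this instead of the require 'rails/all' line (a sketch based on the Rails 3.2 expansion shown above):
require File.expand_path('../boot', __FILE__)

require "rails"
# Load the usual Rails frameworks except active_record, so nothing tries
# to open a MySQL connection at boot.
%w(
  action_controller
  action_mailer
  active_resource
  rails/test_unit
  sprockets
).each do |framework|
  begin
    require "#{framework}/railtie"
  rescue LoadError
  end
end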
You need to delete the lines where you configure ActiveRecord:
# Raise exception on mass assignment protection for Active Record models
config.active_record.mass_assignment_sanitizer = :strict
# Log the query plan for queries taking more than this (works
# with SQLite, MySQL, and PostgreSQL)
config.active_record.auto_explain_threshold_in_seconds = 0.5
I presume you've checked your database.yml to make sure nothing is looking for adapter: mysql?
Also, if you've left mysql in your Gemfile, the mysql gem will bomb out as it can't do its part and therefore has to fail as a dependency. Remove it and rerun bundle install.