How to keep Appium capabilities in a JSON file and call them in code

The following is my Appium capability set to run a test:
cap = new DesiredCapabilities();
cap.setCapability(CapabilityType.PLATFORM, "Android");
cap.setCapability(CapabilityType.VERSION, "5.1.0");
cap.setCapability("deviceName", "mygeny510");
cap.setCapability("appPackage", "com.android.dialer");
cap.setCapability("appActivity", "com.android.dialer.DialtactsActivity");
driver = new AndroidDriver<MobileElement>(new URL("http://127.0.0.1:4723/wd/hub"), cap);
I want to keep the capabilities in an apm.json file:
[
  {
    "platformName": "android",
    "appPackage": "com.android.dialer",
    "appActivity": "com.android.dialer.DialtactsActivity",
    "deviceName": "mygeny510"
  }
]
Now, can anyone help me load apm.json in the code instead of writing out each capability with cap.setCapability(...)?

You can place all the desired capabilities in a separate file and load that file from another file that references it.
For example, I have the desired capabilities in env.rb:
def abc
  {
    caps: {
      platformName: "iOS",
      deviceName: "",
      udid: "",
      app: (File.join(File.dirname(__FILE__), "")),
      bundleId: "",
      automationName: "XCUITest",
      xcodeOrgId: "",
      xcodeSigningId: "",
      platformVersion: "9.3.2",
      noReset: "true",
      fullReset: "false",
      showIOSLog: "true"
    }
  }
end
Now go to the file from which you want to launch these desired capabilities. You need to load the capabilities file (env.rb here) into it; I used require_relative to load the file in order to call the method. Once you do that, you can start the session with:
def any_name
  Appium::Driver.new(abc)              # Pass capabilities to Appium inside the driver
  Appium.promote_appium_methods Object # Makes all appium_lib methods accessible from steps
  $driver.start_driver                 # Starts the Appium driver before the tests begin
end
Hope this helps!
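For the Java setup in the question itself, here is a minimal sketch along the same lines, assuming the org.json library is on the classpath (any JSON parser would do): it reads apm.json and applies every key/value pair as a capability. The helper class name is illustrative.

import java.nio.file.Files;
import java.nio.file.Paths;
import org.json.JSONArray;
import org.json.JSONObject;
import org.openqa.selenium.remote.DesiredCapabilities;

public class CapsLoader {
    public static DesiredCapabilities fromJsonFile(String path) throws Exception {
        String text = new String(Files.readAllBytes(Paths.get(path)));
        // apm.json holds a one-element array, so take the first object
        JSONObject json = new JSONArray(text).getJSONObject(0);
        DesiredCapabilities caps = new DesiredCapabilities();
        for (String key : json.keySet()) {
            caps.setCapability(key, json.get(key));
        }
        return caps;
    }
}

The driver construction from the question then becomes:
driver = new AndroidDriver<MobileElement>(new URL("http://127.0.0.1:4723/wd/hub"), CapsLoader.fromJsonFile("apm.json"));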

Related

Converting Packer 1.6 vsphere-iso configuration code from JSON to HCL2

With the release of Packer 1.6 came several deprecated fields in the vsphere-iso builder. From the looks of it, it seems to be a format/type change: the fields actually still exist, just nested as properties. An example of the changes is the following:
Working in Packer 1.5.6:
JSON
"disk_size": 123456,
"disk_thin_provisioned": true
"network": "VM Network",
"network_card": "vmxnet3"
Working in Packer 1.6.0:
JSON
"storage": [
{
"disk_size": 123456,
"disk_thin_provisioned": true
}
],
"network_adapters": [
{
"network": "VM Network",
"network_card": "vmxnet3"
}
]
The issue I have at the moment is that I'm using Packer 1.6.0 and am trying to convert the above working JSON code to HCL2, but I can't figure out the HCL2 syntax that supports the changes made in Packer 1.6.0.
I've tried the following:
network_adapters = {
  network_card = "vmxnet3"
  network = "VM Network"
}
Output:
An argument named "network_adapter" is not expected here.
network_adapters = (
  network_card = "vmxnet3"
  network = "VM Network"
)
Output:
Error: Unbalanced parentheses

  on .\Packer\ConfigFileName.pkr.hcl line 19, in source "vsphere-iso" "Test":
  18: storage = (
  19:   disk_thin_provisioned = true

Expected a closing parenthesis to terminate the expression.
network_adapters = [
  network_card = "vmxnet3",
  network = "VM Network"
]
Output:
Error: Missing item separator

  on .\Packer\ConfigFileName.pkr.hcl line 19, in source "vsphere-iso" "Test":
  18: storage = [
  19:   disk_thin_provisioned = true,

Expected a comma to mark the beginning of the next item.
I've also tried several other permutations of different collection syntaxes with no luck so far. Any suggestions or tips would be greatly appreciated.
The correct syntax is the following:
network_adapters {
  network_card = "vmxnet3"
  network      = "VM Network"
}
Note that there is no assignment operator (=) between network_adapters and {.
Credit goes to SwampDragons over on the Packer forums for pointing this out.
If you're interested in knowing why: there was a change to how maps are treated in HCL2 back in May 2020 with the release of Packer 1.5.6:
core/hcl2: Maps are now treated as settable arguments as opposed to blocks. For example tags = {} instead of tags {} [GH-9035]
Reference: https://github.com/hashicorp/packer/blob/master/CHANGELOG.md#156-may-1-2020
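Putting it together, a sketch of the relevant portion of an HCL2 source block (values taken from the question; all other builder options omitted) would be:

source "vsphere-iso" "Test" {
  # Repeatable blocks use block syntax with no "=",
  # while plain maps (e.g. tags = {}) use assignment syntax.
  storage {
    disk_size             = 123456
    disk_thin_provisioned = true
  }

  network_adapters {
    network      = "VM Network"
    network_card = "vmxnet3"
  }
}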

glue_version and python_version not working in Terraform

Hello everyone,
I am using Terraform to create a Glue job. AWS Glue now supports the ability to run ETL jobs on Apache Spark 2.4.3 (with Python 3). I want to use this feature, but whenever I make the change it throws an error.
I am using:
aws-cli/1.16.184
Terraform v0.12.6
AWS provider 2.29
resource "aws_glue_job" "aws_glue_job_foo" {
glue_version = "1"
name = "job-name"
description = "job-desc"
role_arn = data.aws_iam_role.aws_glue_iam_role.arn
max_capacity = 1
max_retries = 1
connections = [aws_glue_connection.connection.name]
timeout = 5
command {
name = "pythonshell"
script_location = "s3://bucket/script.py"
python_version = "3"
}
default_arguments = {
"--job-language" = "python"
"--ENV" = "env"
"--ROLE_ARN" = data.aws_iam_role.aws_glue_iam_role.arn
}
execution_property {
max_concurrent_runs = 1
}
}
But it throws this error:
Error: Unsupported argument
An argument named "glue_version" is not expected here.
This Terraform issue has been resolved.
Terraform aws_glue_job now accepts a glue_version argument.
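With provider 2.34.0 or later, the resource from the question can declare the version directly. A sketch trimmed to the relevant arguments (note that valid Glue versions are strings such as "0.9" and "1.0", not "1"):

resource "aws_glue_job" "aws_glue_job_foo" {
  glue_version = "1.0"
  name         = "job-name"
  role_arn     = data.aws_iam_role.aws_glue_iam_role.arn
  max_capacity = 1

  command {
    name            = "pythonshell"
    script_location = "s3://bucket/script.py"
    python_version  = "3"
  }
}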
Previous Answer
With or without python_version in the Terraform command block, I must go to the AWS console to edit the job and set "Glue version". My job fails without this manual step.
Workaround #1
This issue has been reported and debated and includes a workaround.
resource "aws_glue_job" "etl" {
name = "${var.job_name}"
role_arn = "${var.iam_role_arn}"
command {
script_location = "s3://${var.bucket_name}/${aws_s3_bucket_object.script.key}"
}
default_arguments = {
"--enable-metrics" = ""
"--job-language" = "python"
"--TempDir" = "s3://${var.bucket_name}/TEMP"
}
# Manually set python 3 and glue 1.0
provisioner "local-exec" {
command = "aws glue update-job --job-name ${var.job_name} --job-update 'Command={ScriptLocation=s3://${var.bucket_name}/${aws_s3_bucket_object.script.key},PythonVersion=3,Name=glueetl},GlueVersion=1.0,Role=${var.iam_role_arn},DefaultArguments={--enable-metrics=\"\",--job-language=python,--TempDir=\"s3://${var.bucket_name}/TEMP\"}'"
}
}
Workaround #2
Here is a different workaround.
resource "aws_cloudformation_stack" "network" {
name = "${local.name}-glue-job"
template_body = <<STACK
{
"Resources" : {
"MyJob": {
"Type": "AWS::Glue::Job",
"Properties": {
"Command": {
"Name": "glueetl",
"ScriptLocation": "s3://${local.bucket_name}/jobs/${var.job}"
},
"ExecutionProperty": {
"MaxConcurrentRuns": 2
},
"MaxRetries": 0,
"Name": "${local.name}",
"Role": "${var.role}"
}
}
}
}
STACK
}
This has been released in version 2.34.0 of the Terraform AWS provider.
It looks like Terraform uses python_version instead of glue_version.
By using python_version = "3", you should be using Glue version 1.0; Glue version 0.9 doesn't support Python 3.

Please let me know the script for applying a CTB when converting a DWG file to PDF

I want to apply a CTB when creating activities and converting DWG files to PDF using the Design Automation API.
In the PlotToPDF activity, the script was as follows:
"Instruction": {
"CommandLineParameters": "-suppressGraphics",
"Script": "_layoutcreateviewport 1 _tilemode 0 -export _pdf _all result.pdf\n"
}
If I want to apply a CTB file and convert to PDF, how should I write the script?
Autodesk Design Automation API define Plot Settings e.g. greyscale/linewidth
I tried the script written there, but got an error:
[04/19/2019 00:40:15] Command: -PLOT Detailed plot configuration? [Yes/No] <No>: Y
[04/19/2019 00:40:15] Enter a layout name or [?] <レイアウト1>: Enter an output device name or [?] <なし>: AutoCAD PDF (General Documentation).pc3 Y myCTB.ctb
[04/19/2019 00:40:15] <AutoCAD PDF (General Documentation).pc3 Y myCTB.ctb > not found.
[04/19/2019 00:41:15] Error: AutoCAD Core Console is shut down due to timeout.
[04/19/2019 00:41:15] End script phase.
[04/19/2019 00:41:15] Error: An unexpected error happened during phase CoreEngineExecution of job.
I adjusted the command as follows.
-PLOT Y AutoCAD PDF (General Documentation).pc3\n\n\n Y\n\n\n\nY myCTB.ctb\n
The result was another error:
[04/19/2019 01:09:45] Command: -PLOT Detailed plot configuration? [Yes/No] <No>: Y
[04/19/2019 01:09:45] Enter a layout name or [?] <レイアウト1>: Enter an output device name or [?] <なし>: AutoCAD PDF (General Documentation).pc3
[04/19/2019 01:09:45] Enter paper size or [?] <ANSI A (11.00 x 8.50 Inches)>:
[04/19/2019 01:09:45] Enter paper units [Inches/Millimeters] <Millimeters>:
[04/19/2019 01:09:45] Enter drawing orientation [Portrait/Landscape] <Portrait>: Plot upside down? [Yes/No] <No>: Y
[04/19/2019 01:09:45] Enter plot area [Display/Extents/Layout/View/Window] <Layout>:
[04/19/2019 01:09:45] Enter plot scale (Plotted Millimeters=Drawing Units) or [Fit] <1:1>:
[04/19/2019 01:09:45] Enter plot offset (x,y) <0.00,0.00>:
[04/19/2019 01:09:45] Plot with plot styles? [Yes/No] <No>: Y Enter plot style table name or [?] (enter . for none) <>: myCTB.ctb
[04/19/2019 01:10:46] Error: AutoCAD Core Console is shut down due to timeout.
[04/19/2019 01:10:47] End script phase.
[04/19/2019 01:10:47] Error: An unexpected error happened during phase CoreEngineExecution of job.
You can also put the CTB download as a reference of your host drawing input argument. Your workitem will look like this:
{
  "activityId": "AutoCAD.PlotToPDF+prod",
  "arguments": {
    "HostDwg": {
      "url": "<download url to host drawing>",
      "headers": null,
      "references": [
        {
          "localName": "myCTB.ctb",
          "references": null,
          "verb": "get",
          "url": "<download url to ctb>"
        }
      ],
      "verb": "get"
    },
    "Result": {
      "headers": null,
      "url": "<upload url for result.pdf>",
      "verb": "put"
    }
  }
}
Assume your drawing already has a "Plot style table" assigned to a particular custom CTB file. To make the CTB override(s) take effect, you just need to bring the CTB file together with your drawing file to the Forge DA service. You can do so by:
1. Create an eTransmit package that includes the drawing file(s) and the CTB file (or any other supporting files you wish, like font files);
2. Specify the URL to the eTransmit zip file instead of the host drawing file as the input argument;
3. You can still use the "AutoCAD.PlotToPDF" activity and your CTB plot style should work then.
Here is an example for v2:
{
  "ActivityId": "PlotToPDF",
  "Arguments": {
    "InputArguments": [
      {
        "Resource": "{\"UserId\":null,\"Version\":0,\"Resource\":\"http://mystore.mycom.com/download/mydwg.dwg\",\"LocalFileName\":\"myDwg.dwg\",\"RelatedFiles\":[{\"UserId\":null,\"Version\":0,\"Resource\":\"http://mystore.mycom.com/download/myCTB.ctb\",\"LocalFileName\":\"myCTB.ctb\",\"RelatedFiles\":[]}]}",
        "Name": "HostDwg",
        "ResourceKind": "RemoteFileResource"
      }
    ],
    "OutputArguments": [
      {
        "Name": "Result",
        "Resource": "http://mystore.mycom.com/path/item/abcd",
        "HttpVerb": "POST"
      }
    ]
  }
}

How to define config file variables?

I have a configuration file with:
{path, "/mnt/test/"}.
{name, "Joe"}.
The path and the name can be changed by a user. As far as I know, there is a way to save those variables in a module by using file:consult/1 inside
-define(VARIABLE, <parsing of the config file>).
Are there better ways to read a config file when the module starts working, without writing a parsing function in -define? (As far as I know, according to Erlang developers, it's not good practice to put complicated functions in -define.)
If you only need to load the config when you start the application, you can use the application config file, which is set up in rebar.config:
{profiles, [
    {local, [
        {relx, [
            {dev_mode, false},
            {include_erts, true},
            {include_src, false},
            {vm_args, "config/local/vm.args"},
            {sys_config, "config/local/yourapplication.config"}
        ]}
    ]}
]}.
More info about this is here: rebar3 configuration.
Next, create yourapplication.config and store it in your application folder at /app/config/local/yourapplication.config.
This configuration should have a structure like the following example:
[
  {yourapplicationname, [
    {path, "/mnt/test/"},
    {name, "Joe"}
  ]}
].
When your application is started, you can get the whole config data with:
{ok, "/mnt/test/"} = application:get_env(yourapplicationname, path)
{ok, "Joe"} = application:get_env(yourapplicationname, name)
and now you may -define these variables like:
-define(VARIABLE,
  case application:get_env(yourapplicationname, path) of
    {ok, Data} -> Data;
    _ -> undefined
  end
).
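Alternatively, if you want to keep the original {path, ...}. / {name, ...}. term format from the question, file:consult/1 can read it directly at runtime, so no parsing inside -define is needed. A minimal sketch (module and function names are illustrative):

%% Read the term file once at startup and look values up as needed.
-module(myconfig).
-export([load/1, value/2]).

%% Path points at a file containing, e.g.:
%%   {path, "/mnt/test/"}.
%%   {name, "Joe"}.
load(Path) ->
    {ok, Terms} = file:consult(Path),
    Terms.

value(Key, Terms) ->
    proplists:get_value(Key, Terms).

Usage: Terms = myconfig:load("myapp.config"), then "/mnt/test/" = myconfig:value(path, Terms).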

Generate fake CSV to test with RSpec

I want to test my method, which imports a CSV file, but I don't know how to generate fake CSV files to test it. I tried a lot of solutions I found on Stack Overflow, but they don't work in my case.
Here is the original CSV file:
firstname,lastname,home_phone_number,mobile_phone_number,email,address
orsay,dup,0154862548,0658965848,orsay.dup@gmail.com,2 rue du pré paris
richard,planc,0145878596,0625147895,richard.planc@gmail.com,45 avenue du general leclerc
person.rb
def self.import_data(file)
  filename = File.join Rails.root, file
  CSV.foreach(filename, headers: true, col_sep: ',') do |row|
    firstname, lastname, home_phone_number, mobile_phone_number, email, address = row
    person = Person.find_or_create_by(firstname: row["firstname"], lastname: row['lastname'], address: row['address'])
    if person.is_former_email?(row['email']) != true
      person.update_attributes({firstname: row['firstname'], lastname: row['lastname'], home_phone_number: row['home_phone_number'], mobile_phone_number: row['mobile_phone_number'], address: row['address'], email: row['email']})
    end
  end
end
person_spec.rb:
require "rails_helper"

RSpec.describe Person, :type => :model do
  describe "CSV file is valid" do
    file = #fake file

    it "should read in the csv" do
    end

    it "should have result" do
    end
  end

  describe "import valid data" do
    valid_data_file = #fake file

    it "save new people" do
      Person.delete_all
      expect { Person.import_data(valid_data_file) }.to change { Person.count }.by(2)
      expect(Person.find_by(lastname: 'dup').email).to eq "orsay.dup@gmail.com"
    end

    it "update with new email" do
    end
  end

  describe "import invalid data" do
    invalid_data_file = #fake file

    it "should not update with former email" do
    end

    it "should not import twice from CSV" do
    end
  end
end
I successfully used the Faked CSV gem from https://github.com/jiananlu/faked_csv to generate a CSV file with fake data for your purpose.
Follow these steps to use it:
Open your command line (e.g. on OSX, open Spotlight with Cmd+Space and enter "Terminal").
Install the Faked CSV gem by running the command gem install faked_csv. Note: if using a Ruby on Rails project, add gem 'faked_csv' to your Gemfile and then run bundle install.
Validate that the Faked CSV gem installed successfully by typing faked_csv --version into the terminal.
Create a configuration file for the Faked CSV gem, in which you define how to generate the fake data. For example, the configuration below will generate a CSV file with 200 rows (edit this to as many as you wish) of comma-separated columns for each field. If the value of a field's type is prefixed with faker:, refer to the "Usage" section of the Faker gem https://github.com/stympy/faker for examples.
my_faked_config.csv.json
{
  "rows": 200,
  "fields": [
    {
      "name": "firstname",
      "type": "faker:name:first_name",
      "inject": ["luke", "dup", "planc"]
    },
    {
      "name": "lastname",
      "type": "faker:name:last_name",
      "inject": ["schoen", "orsay", "richard"]
    },
    {
      "name": "home_phone_number",
      "type": "rand:int",
      "range": [1000000000, 9999999999]
    },
    {
      "name": "mobile_phone_number",
      "type": "rand:int",
      "range": [1000000000, 9999999999]
    },
    {
      "name": "email",
      "type": "faker:internet:email"
    },
    {
      "name": "address",
      "type": "faker:address:street_address",
      "rotate": 200
    }
  ]
}
Run the following command to use the configuration file my_faked_config.csv.json to generate a CSV file named my_faked_data.csv in the current folder containing the fake data:
faked_csv -i my_faked_config.csv.json -o my_faked_data.csv
Since the generated file may not include the associated label for each column, manually insert the following line at the top of my_faked_data.csv:
firstname,lastname,home_phone_number,mobile_phone_number,email,address
Review the final contents of the my_faked_data.csv CSV file containing the fake data, which should appear similar to the following:
my_faked_data.csv
firstname,lastname,home_phone_number,mobile_phone_number,email,address
Kyler,Eichmann,8120675609,7804878030,norene@bergnaum.io,56006 Fadel Mission
Hanna,Barton,9424088332,8720530995,anabel@moengoyette.name,874 Leannon Ways
Mortimer,Stokes,5645028548,9662617821,moses@kihnlegros.org,566 Wilderman Falls
Camden,Langworth,2622619338,1951547890,vincenza@gaylordkemmer.info,823 Esmeralda Pike
Nikolas,Hessel,5476149226,1051193757,jonathon@ziemannnitzsche.name,276 Reinger Parks
...
Modify your person_spec.rb unit test using the technique shown below, which passes mock data in to test the functionality of the import_data method in your person.rb file.
person_spec.rb
require 'rails_helper'

RSpec.describe Person, type: :model do
  describe 'Class' do
    subject { Person }

    it { should respond_to(:import_data) }

    # Note: the header has six columns, so the data row must too.
    let(:data) { "firstname,lastname,home_phone_number,mobile_phone_number,email,address\rKyler,Eichmann,8120675609,7804878030,norene@bergnaum.io,56006 Fadel Mission" }

    describe "#import_data" do
      it "saves new people" do
        File.stub(:open).with("filename", {:universal_newline => false, :headers => true}) {
          StringIO.new(data)
        }
        Person.import_data("filename")
        expect(Person.find_by(firstname: 'Kyler').mobile_phone_number).to eq "7804878030"
      end
    end
  end
end
Note: I used it myself to generate a large CSV file with meaningful fake data for my Ruby on Rails CSV app. The app allows a user to upload a CSV file containing specific column names, persists it to a PostgreSQL database, and then displays the data in a paginated table view with the ability to search and sort using AJAX.
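If you would rather not depend on an extra gem, the spec can also write its own small fixture with Ruby's built-in CSV library. A minimal sketch (helper name and path are illustrative; it returns a relative path because import_data joins its argument with Rails.root):

require 'csv'

# Write a throwaway CSV under Rails.root/tmp for the spec to import.
def write_fake_csv(rows)
  CSV.open(Rails.root.join('tmp', 'fake_people.csv'), 'w') do |csv|
    csv << %w[firstname lastname home_phone_number mobile_phone_number email address]
    rows.each { |row| csv << row }
  end
  'tmp/fake_people.csv' # relative, because import_data prepends Rails.root
end

# Inside the "import valid data" describe block:
it "save new people" do
  file = write_fake_csv([['orsay', 'dup', '0154862548', '0658965848', 'orsay.dup@gmail.com', '2 rue du pré paris']])
  expect { Person.import_data(file) }.to change { Person.count }.by(1)
end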
Use OpenOffice or Excel (a spreadsheet program) and save the file out as a .csv file in the save options.