How to define config file variables? - configuration

I have a configuration file with:
{path, "/mnt/test/"}.
{name, "Joe"}.
The path and the name can be changed by a user. As far as I know, there is a way to read those variables into a module with file:consult/1 inside
-define(VARIABLE, <parsing of the config file>).
Is there a better way to read a config file when the module starts, without putting a parsing function in -define? (As far as I know, the Erlang developers advise against putting complicated functions in -define.)

If you only need to read the config when the application starts, you can use the application config file, which is referenced in rebar.config:
{profiles, [
    {local, [
        {relx, [
            {dev_mode, false},
            {include_erts, true},
            {include_src, false},
            {vm_args, "config/local/vm.args"},
            {sys_config, "config/local/yourapplication.config"}
        ]}
    ]}
]}.
More info about this is here: rebar3 configuration.
The next step is to create yourapplication.config and store it in your application folder: /app/config/local/yourapplication.config.
This configuration should have a structure like this example:
[
    {yourapplicationname, [
        {path, "/mnt/test/"},
        {name, "Joe"}
    ]}
].
So when your application is started, you can get the config data with:
{ok, "/mnt/test/"} = application:get_env(yourapplicationname, path)
{ok, "Joe"} = application:get_env(yourapplicationname, name)
and now you may -define these variables like:
-define(VARIABLE,
    case application:get_env(yourapplicationname, path) of
        {ok, Data} -> Data;
        _ -> undefined
    end
).
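If all you want is a fallback when the key is missing, a small accessor function is often cleaner than a macro. A minimal sketch (the module name, application name, and defaults are assumptions for illustration), using application:get_env/3, which returns the bare value rather than {ok, Value}:

```erlang
%% Sketch: config accessors instead of a macro. The application name
%% `yourapplicationname` and the default values are placeholders.
-module(myapp_config).
-export([path/0, name/0]).

%% application:get_env/3 returns the value directly, falling back to
%% the supplied default when the key is absent.
path() -> application:get_env(yourapplicationname, path, "/mnt/test/").
name() -> application:get_env(yourapplicationname, name, "Joe").
```

Callers then write myapp_config:path() instead of ?VARIABLE, and the "complicated function in -define" problem goes away entirely.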

Related

Avoid generation of build.gradle when using swagger-codegen-plugin

We are using gradle.plugin.org.detoeuf:swagger-codegen-plugin.
We want to change the content of the build.gradle file in the output directory.
We added the build.gradle file to .swagger-codegen-ignore, but BOTH the .swagger-codegen-ignore file and the build.gradle file are re-created every time we run the swagger task.
Our swagger section looks like this
swagger {
    inputSpec = "${project.projectDir}/swagger/backoffice-service-api-swagger.json"
    outputDir = file("${project.projectDir}/../backoffice-service-api-client/")
    lang = 'java'
    additionalProperties = [
        'invokerPackage'         : 'com.aaa.bbb.backoffice.service',
        'modelPackage'           : 'com.aaa.bbb.backoffice.service.model',
        'apiPackage'             : 'com.aaa.bbb.backoffice.service.api',
        'dateLibrary'            : 'joda',
        'groupId'                : 'com.aaa.bbb',
        'artifactId'             : 'backoffice-service-api-client',
        'artifactVersion'        : '1.0.0',
        'hideGenerationTimestamp': 'true',
        'dateLibrary'            : 'java8'
    ]
    systemProperties = ["apiTests" : "false"]
}
.swagger-codegen-ignore file looks like this -
# Swagger Codegen Ignore
# Generated by swagger-codegen https://github.com/swagger-api/swagger-codegen
# Use this file to prevent files from being overwritten by the generator.
# The patterns follow closely to .gitignore or .dockerignore.
# As an example, the C# client generator defines ApiClient.cs.
# You can make changes and tell Swagger Codgen to ignore just this file by uncommenting the following line:
#ApiClient.cs
# You can match any string of characters against a directory, file or extension with a single asterisk (*):
#foo/*/qux
# The above matches foo/bar/qux and foo/baz/qux, but not foo/bar/baz/qux
# You can recursively match patterns against a directory, file or extension with a double asterisk (**):
#foo/**/qux
# This matches foo/bar/qux, foo/baz/qux, and foo/bar/baz/qux
# You can also negate patterns with an exclamation (!).
# For example, you can ignore all files in a docs folder with the file extension .md:
#docs/*.md
# Then explicitly reverse the ignore rule for a single file:
#!docs/README.md
build.gradle
You can add the ignoreFileOverride option in additionalProperties as below. If the files listed in ignoreFileOverride do not exist in the project, swagger-codegen will generate them; if they already exist, swagger-codegen will leave them untouched.
swagger {
    inputSpec = "${project.projectDir}/swagger/backoffice-service-api-swagger.json"
    outputDir = file("${project.projectDir}/../backoffice-service-api-client/")
    lang = 'java'
    additionalProperties = [
        'invokerPackage'         : 'com.aaa.bbb.backoffice.service',
        'modelPackage'           : 'com.aaa.bbb.backoffice.service.model',
        'apiPackage'             : 'com.aaa.bbb.backoffice.service.api',
        'dateLibrary'            : 'joda',
        'groupId'                : 'com.aaa.bbb',
        'artifactId'             : 'backoffice-service-api-client',
        'artifactVersion'        : '1.0.0',
        'hideGenerationTimestamp': 'true',
        'dateLibrary'            : 'java8',
        'ignoreFileOverride'     : '.swagger-codegen-ignore,build.gradle'
    ]
    systemProperties = ["apiTests" : "false"]
}

ERLANG with JSON

I run the following command in Erlang:
os:cmd("curl -k -X GET http://10.210.12.154:10065/iot/get/task").
It gives a JSON output like this,
{"data":[
{"id":1,"task":"Turn on the bulb when the temperature in greater than 28","working_condition":1,"depending_value":"Temperature","action":"123"},
{"id":2,"task":"Trun on the second bulb when the temperature is greater than 30","working_condition":0,"depending_value":"Temperature","action":"124"}
]}
I want to organize this data by id, task, depending_value, and action, like putting it into a table, so that I can easily find the depending value, working condition, and action for id=1. How can I do this?
It gives a JSON output like this.
{"data":[{"id":1,"t ...
Highly doubtful. The docs say that os:cmd() returns a string, which does not start with a {. Note also that a string is not even an erlang data type, rather double quotes are a shortcut for creating a list of integers, and a list of integers is not terribly useful in your case.
Here are two options:
Call list_to_binary() on the list of integers returned by os:cmd() to convert it to a binary.
Instead of os:cmd(), use an erlang http client, like hackney, which will return the json as a binary.
The reason you want a binary is because then you can use an erlang json module, like jsx, to convert the binary into an erlang map (which might be what you are after?).
Here's what that will look like:
3> Json = <<"{\"data\": [{\"x\": 1, \"y\": 2}, {\"a\": 3, \"b\": 4}] }">>.
<<"{\"data\": [{\"x\": 1, \"y\": 2}, {\"a\": 3, \"b\": 4}] }">>
4> Map = jsx:decode(Json, [return_maps]).
#{<<"data">> =>
[#{<<"x">> => 1,<<"y">> => 2},#{<<"a">> => 3,<<"b">> => 4}]}
5> Data = maps:get(<<"data">>, Map).
[#{<<"x">> => 1,<<"y">> => 2},#{<<"a">> => 3,<<"b">> => 4}]
6> InnerMap1 = hd(Data).
#{<<"x">> => 1,<<"y">> => 2}
7> maps:get(<<"x">>, InnerMap1).
1
...putting them in to a table. I want to easily find what is the
depending value, working condition & action for Id=1.
Erlang has various table implementations: ets, dets, and mnesia. Here is an ets example:
-module(my).
-compile(export_all).

get_tasks() ->
    Method = get,
    %See description of this awesome website below.
    URL = <<"https://my-json-server.typicode.com/7stud/json_server/db">>,
    Headers = [],
    Payload = <<>>,
    Options = [],
    {ok, 200, _RespHeaders, ClientRef} =
        hackney:request(Method, URL, Headers, Payload, Options),
    {ok, Body} = hackney:body(ClientRef),
    %{ok, Body} = file:read_file('json/json.txt'), %Or, for testing, you can paste the json in a file (without the outer quotes), and read_file() will return a binary.
    Map = jsx:decode(Body, [return_maps]),
    _Tasks = maps:get(<<"data">>, Map).

create_table(TableName, Tuples) ->
    ets:new(TableName, [set, named_table]),
    insert(TableName, Tuples).

insert(_Table, []) ->
    ok;
insert(Table, [Tuple|Tuples]) ->
    #{<<"id">> := Id} = Tuple,
    ets:insert(Table, {Id, Tuple}),
    insert(Table, Tuples).

retrieve_task(TableName, Id) ->
    [{_Id, Task}] = ets:lookup(TableName, Id),
    Task.
By default, an ets table of type set uses the first position of the inserted tuple as the unique key (or you can explicitly specify another tuple position as the key with the keypos option).
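As a small sketch of specifying the key position (the table name and tuple layout here are made up for illustration): if you store flat tuples instead of {Id, Map} pairs, you can tell ets which tuple position holds the key:

```erlang
%% keypos is 1-based; here the id lives in position 2 of each tuple,
%% so lookups are done on that element rather than the first.
T = ets:new(tasks_by_id, [set, {keypos, 2}]),
true = ets:insert(T, {<<"Turn on the bulb">>, 1, <<"123">>}),
[{_Task, 1, _Action}] = ets:lookup(T, 1).
```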
** If you have a github account, I discovered a really cool website that allows you to place a json file in a new repository on github, and the website will serve up that file as json. Check it out at https://my-json-server.typicode.com:
How to
Create a repository on GitHub (<your-username>/<your-repo>)
Create a db.json file [in the repository].
Visit https://my-json-server.typicode.com/<your-username>/<your-repo> to
access your server
You can see the URL I'm using in the code; it can be obtained by clicking on the link on the server page the site provides and copying the URL from your web browser's address bar.
In the shell:
.../myapp$ rebar3 shell
===> Verifying dependencies...
===> Compiling myapp
src/my.erl:2: Warning: export_all flag enabled - all functions will be exported
Erlang/OTP 20 [erts-9.3] [source] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:1] [hipe] [kernel-poll:false]
Eshell V9.3 (abort with ^G)
1> ===> The rebar3 shell is a development tool; to deploy applications in production, consider using releases (http://www.rebar3.org/docs/releases)
===> Booted unicode_util_compat
===> Booted idna
===> Booted mimerl
===> Booted certifi
===> Booted ssl_verify_fun
===> Booted metrics
===> Booted hackney
1> Tasks = my:get_tasks().
[#{<<"action">> => <<"123">>,
<<"depending_value">> => <<"Temperature">>,<<"id">> => 1,
<<"task">> =>
<<"Turn on the bulb when the temperature in greater than 28">>,
<<"working_condition">> => 1},
#{<<"action">> => <<"124">>,
<<"depending_value">> => <<"Temperature">>,<<"id">> => 2,
<<"task">> =>
<<"Trun on the second bulb when the temperature is greater than 30">>,
<<"working_condition">> => 0}]
2> my:create_table(tasks, Tasks).
ok
3> my:retrieve_task(tasks, 1).
#{<<"action">> => <<"123">>,
<<"depending_value">> => <<"Temperature">>,<<"id">> => 1,
<<"task">> =>
<<"Turn on the bulb when the temperature in greater than 28">>,
<<"working_condition">> => 1}
4> my:retrieve_task(tasks, 2).
#{<<"action">> => <<"124">>,
<<"depending_value">> => <<"Temperature">>,<<"id">> => 2,
<<"task">> =>
<<"Trun on the second bulb when the temperature is greater than 30">>,
<<"working_condition">> => 0}
5> my:retrieve_task(tasks, 3).
** exception error: no match of right hand side value []
in function my:retrieve_task/2 (/Users/7stud/erlang_programs/old/myapp/src/my.erl, line 58)
6>
Note that in the output the id appears over at the right, at the end of one of the lines. Also, if you get any errors in the shell, the shell will automatically restart a new process and the ets table will be destroyed, so you have to create it anew.
rebar.config:
{erl_opts, [debug_info]}.
{deps, [
{jsx, "2.8.0"},
{hackney, ".*", {git, "git://github.com/benoitc/hackney.git", {branch, "master"}}}
]}.
{shell, [{apps, [hackney]}]}. % This causes the shell to automatically start the listed apps. See https://stackoverflow.com/questions/40211752/how-to-get-an-erlang-app-to-run-at-starting-rebar3/45361175#comment95565011_45361175
src/myapp.app.src:
{application, 'myapp',
 [{description, "An OTP application"},
  {vsn, "0.1.0"},
  {registered, []},
  {mod, {'myapp_app', []}},
  {applications,
   [kernel,
    stdlib
   ]},
  {env, []},
  {modules, []},
  {contributors, []},
  {licenses, []},
  {links, []}
 ]}.
But, according to the rebar3 dependencies docs:
You should add each dependency to your app or app.src files:
So, I guess src/myapp.app.src should look like this:
{application, 'myapp',
 [{description, "An OTP application"},
  {vsn, "0.1.0"},
  {registered, []},
  {mod, {'myapp_app', []}},
  {applications,
   [kernel,
    stdlib,
    jsx,
    hackney
   ]},
  {env, []},
  {modules, []},
  {contributors, []},
  {licenses, []},
  {links, []}
 ]}.

Postgrex how to define json library

I'm just trying to use Postgrex without any kind of Ecto setup, so just the example from the documentation README.
Here is what my module looks like:
defmodule Receive do
  def start(_type, _args) do
    {:ok, pid} = Postgrex.start_link(
      hostname: "localhost",
      username: "john",
      # password: "",
      database: "property_actions",
      extensions: [{Postgrex.Extensions.JSON}]
    )

    Postgrex.query!(
      pid,
      "INSERT INTO actions (search_terms) VALUES ($1)",
      [
        %{foo: 'bar'}
      ]
    )
  end
end
When I run the code, I get:
** (RuntimeError) type `json` can not be handled by the types module Postgrex.DefaultTypes, it must define a `:json` library in its options to support JSON types
Is there something I'm not setting up correctly? From what I've gathered in the documentation, I shouldn't even need to have that extensions line because json is handled by default.
On Postgrex <= 0.13, you need to define your own types:
Postgrex.Types.define(MyApp.PostgrexTypes, [], json: Poison)
and then when starting Postgrex:
Postgrex.start_link(types: MyApp.PostgrexTypes)
On Postgrex >= 0.14 (currently master), it was made easier:
config :postgrex, :json_library, Poison
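Putting the pre-0.14 pieces together, a minimal sketch (the MyApp.PostgrexTypes module name is just a placeholder, and this assumes Poison is in your deps):

```elixir
# Sketch for Postgrex <= 0.13. Postgrex.Types.define/3 generates a module
# at compile time, so call it once, outside the function that opens the
# connection (e.g. in its own lib/postgrex_types.ex file).
Postgrex.Types.define(MyApp.PostgrexTypes, [], json: Poison)

{:ok, pid} =
  Postgrex.start_link(
    hostname: "localhost",
    username: "john",
    database: "property_actions",
    types: MyApp.PostgrexTypes
  )

# Note: "bar" (a string), not 'bar' (a charlist), so it encodes as JSON text.
Postgrex.query!(pid, "INSERT INTO actions (search_terms) VALUES ($1)", [%{foo: "bar"}])
```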

Generate fake CSV to test with rspec

I want to test my method which imports a CSV file, but I don't know how to generate fake CSV files to test it.
I tried a lot of solutions I found on Stack Overflow, but they don't work in my case.
Here is the original CSV file:
firstname,lastname,home_phone_number,mobile_phone_number,email,address
orsay,dup,0154862548,0658965848,orsay.dup#gmail.com,2 rue du pré paris
richard,planc,0145878596,0625147895,richard.planc#gmail.com,45 avenue du general leclerc
person.rb
def self.import_data(file)
  filename = File.join Rails.root, file
  CSV.foreach(filename, headers: true, col_sep: ',') do |row|
    firstname, lastname, home_phone_number, mobile_phone_number, email, address = row
    person = Person.find_or_create_by(firstname: row["firstname"], lastname: row['lastname'], address: row['address'])
    if person.is_former_email?(row['email']) != true
      person.update_attributes({firstname: row['firstname'], lastname: row['lastname'], home_phone_number: row['home_phone_number'], mobile_phone_number: row['mobile_phone_number'], address: row['address'], email: row['email']})
    end
  end
end
person_spec.rb :
require "rails_helper"

RSpec.describe Person, :type => :model do
  describe "CSV file is valid" do
    file = #fake file
    it "should read in the csv" do
    end
    it "should have result" do
    end
  end

  describe "import valid data" do
    valid_data_file = #fake file
    it "save new people" do
      Person.delete_all
      expect { Person.import_data(valid_data_file) }.to change { Person.count }.by(2)
      expect(Person.find_by(lastname: 'dup').email).to eq "orsay.dup#gmail.com"
    end
    it "update with new email" do
    end
  end

  describe "import invalid data" do
    invalid_data_file = #fake file
    it "should not update with former email" do
    end
    it "should not import twice from CSV" do
    end
  end
end
I successfully used the Faked CSV Gem from https://github.com/jiananlu/faked_csv to achieve your purpose of generating a CSV file with fake data.
Follow these steps to use it:
Open your command line (i.e. on OSX open Spotlight with CMD+Space, and enter "Terminal")
Install Faked CSV Gem by running command gem install faked_csv. Note: If using a Ruby on Rails project add gem 'faked_csv' to your Gemfile, and then run bundle install
Validate Faked CSV Gem installed successfully by typing in Bash Terminal faked_csv --version
Create a configuration file for the Faked CSV Gem, in which you define how to generate the fake data. For example, the configuration below will generate a CSV file with 200 rows (edit this to as many as you wish) containing comma-separated columns for each field. If the value of a field's type is prefixed with faker:, refer to the "Usage" section of the Faker Gem https://github.com/stympy/faker for examples.
my_faked_config.csv.json
{
    "rows": 200,
    "fields": [
        {
            "name": "firstname",
            "type": "faker:name:first_name",
            "inject": ["luke", "dup", "planc"]
        },
        {
            "name": "lastname",
            "type": "faker:name:last_name",
            "inject": ["schoen", "orsay", "richard"]
        },
        {
            "name": "home_phone_number",
            "type": "rand:int",
            "range": [1000000000, 9999999999]
        },
        {
            "name": "mobile_phone_number",
            "type": "rand:int",
            "range": [1000000000, 9999999999]
        },
        {
            "name": "email",
            "type": "faker:internet:email"
        },
        {
            "name": "address",
            "type": "faker:address:street_address",
            "rotate": 200
        }
    ]
}
Run the following command to use the configuration file my_faked_config.csv.json to generate a CSV file named my_faked_data.csv in the current folder, containing the fake data: faked_csv -i my_faked_config.csv.json -o my_faked_data.csv
Since the generated file may not include the associated label for each column, manually insert the following header line at the top of my_faked_data.csv: firstname,lastname,home_phone_number,mobile_phone_number,email,address
Review the final contents of the my_faked_data.csv CSV file containing the fake data, which should appear similar to the following:
my_faked_data.csv
firstname,lastname,home_phone_number,mobile_phone_number,email,address
Kyler,Eichmann,8120675609,7804878030,norene#bergnaum.io,56006 Fadel Mission
Hanna,Barton,9424088332,8720530995,anabel#moengoyette.name,874 Leannon Ways
Mortimer,Stokes,5645028548,9662617821,moses#kihnlegros.org,566 Wilderman Falls
Camden,Langworth,2622619338,1951547890,vincenza#gaylordkemmer.info,823 Esmeralda Pike
Nikolas,Hessel,5476149226,1051193757,jonathon#ziemannnitzsche.name,276 Reinger Parks
...
Modify your person_spec.rb Unit Test using the technique shown below, which passes in Mock data to test functionality of the import_data function of your person.rb file
person_spec.rb
require 'rails_helper'

RSpec.describe Person, type: :model do
  describe 'Class' do
    subject { Person }
    it { should respond_to(:import_data) }

    # One header line plus one data row with the same six columns.
    let(:data) { "firstname,lastname,home_phone_number,mobile_phone_number,email,address\nKyler,Eichmann,8120675609,7804878030,norene#bergnaum.io,56006 Fadel Mission" }

    describe "#import_data" do
      it "save new people" do
        File.stub(:open).with("filename", {:universal_newline=>false, :headers=>true}) {
          StringIO.new(data)
        }
        Person.import_data("filename")
        expect(Person.find_by(firstname: 'Kyler').mobile_phone_number).to eq "7804878030"
      end
    end
  end
end
Note: I used it myself to generate a large CSV file with meaningful fake data for my Ruby on Rails CSV app. My app allows a user to upload a CSV file containing specific column names and persist it to a PostgreSQL database and it then displays the data in a Paginated table view with the ability to Search and Sort using AJAX.
Use OpenOffice or Excel (a spreadsheet program) and save the file out as a .csv file in the save options.
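If you'd rather not depend on a gem or a spreadsheet, you can also generate a throwaway CSV from the spec itself with Ruby's standard library. A minimal sketch (the helper name and the sample row are made up; adapt the headers to whatever your import expects):

```ruby
require "csv"
require "tempfile"

# Headers matching the import in the question; the row values are invented.
HEADERS = %w[firstname lastname home_phone_number mobile_phone_number email address]

# Write the headers plus the given rows to a temp file and return its path.
def fake_csv(rows)
  file = Tempfile.new(["people", ".csv"])
  CSV.open(file.path, "w") do |csv|
    csv << HEADERS
    rows.each { |row| csv << row }
  end
  file.path
end

path = fake_csv([
  ["orsay", "dup", "0154862548", "0658965848", "orsay.dup@gmail.com", "2 rue du pre paris"]
])
rows = CSV.read(path, headers: true)
puts rows.first["email"]  # -> orsay.dup@gmail.com
```

You can then hand the resulting path to the method under test; note that the import above joins the name with Rails.root, so you may need to adjust it (or the path) for an absolute temp-file location.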

RabbitMQ Shovel config with Alternate-Exchange

I'm trying to configure the Shovel plugin for RabbitMQ with a list of declarations. I configured my remote exchange with an alternate-exchange when I created it.
My problem is that I can't get the shovel config file to include this argument, so RabbitMQ crashes on startup.
This is what my config looks like:
[
 {mnesia, [{dump_log_write_threshold, 100}]},
 {rabbit, [{vm_memory_high_watermark, 0.4}]},
 {rabbitmq_shovel,
  [{shovels,
    [{call_stats_shovel,
      [{sources,
        [{broker, "amqp://guest:guest#localhost:5672/test"},
         {declarations,
          [{'queue.declare', [{queue, <<"incoming">>}, durable]},
           {'exchange.declare', [{exchange, <<"my-exchange-topic">>}, {type, <<"topic">>}, durable]},
           {'queue.bind', [{exchange, <<"my-exchange-topic">>}, {queue, <<"incoming">>}]}
          ]}
        ]},
       {destinations,
        [{broker, "amqp://guest:guest#172.16.3.162:5672/blah"},
         {declarations,
          [{'queue.declare', [{queue, <<"billing">>}, durable]},
           {'exchange.declare', [{exchange, <<"my-exchange-topic">>}, {type, <<"topic">>}, {alternate_exchange, <<"alt">>}, durable]},
           {'queue.bind', [{exchange, <<"my-exchange-topic">>}, {queue, <<"billing">>}, {routing_key, <<"physical">>}]}
          ]}
        ]},
       {queue, <<"incoming">>},
       {ack_mode, no_ack},
       {publish_properties, [{delivery_mode, 2}]},
       {reconnect_delay, 5}
      ]}
    ]}
  ]}
].
The problem is on the destination exchange called my-exchange-topic. If I take out the declarations section then the config file works.
This is the error:
=INFO REPORT==== 31-Jul-2012::12:15:25 ===
application: rabbitmq_shovel
exited: {{invalid_shovel_configuration,call_stats_shovel,
{invalid_parameter_value,destinations,
{unknown_fields,'exchange.declare',
[alternate_exchange]}}},
{rabbit_shovel,start,[normal,[]]}}
type: permanent
If I leave the alternate_exchange section out of the declaration I get this error in RabbitMQ web management:
{{shutdown,
{server_initiated_close,406,
<<"PRECONDITION_FAILED - inequivalent arg 'alternate-exchange' for exchange 'my-exchange-topic' in vhost 'blah':
received none but current is the value 'alt' of type 'longstr'">>}},
{gen_server,call,
[<0.473.0>,
{call,
{'exchange.declare',0,<<"my-exchange-topic">>,<<"topic">>,false,
true,false,false,false,[]},
none,<0.444.0>},
infinity]}}
For anyone looking at how to configure exchanges and queues that require additional arguments, you do it like this:
{'exchange.declare',[{exchange, <<"my-exchange-topic">>},{type, <<"topic">>}, durable, {arguments, [{<<"alternate-exchange">>, longstr, <<"alternate-exchange">>}]} ]},
you can do a similar thing with queues:
{'queue.declare',[{queue, <<"my-queue">>},durable, {arguments, [{<<"x-dead-letter-exchange">>, longstr, <<"dead-letter-queue">>}]}]}
For clarification of the comment above, in case of an exchange2exchange shovel, the config would be:
{'exchange.declare',[{exchange, <<"my-exchange-topic">>},{type, <<"topic">>}, durable, {arguments, [{<<"alternate-exchange">>, longstr, <<"name-of-your-alternate-exchange">>}]} ]},