sbt-android gives "IllegalArgumentException already added..." running android:package for multi-project build - sbt-android

I have an sbt root project with an actionbarsherlock subproject which I can't package into an APK.
I am able to build both projects successfully, but when I run android:package I get errors from the root/android:dex task where classes from actionbarsherlock are being dex'd twice:
Uncaught translation error: java.lang.IllegalArgumentException: already added: Lcom/actionbarsherlock/ActionBarSherlock$Implementation;
I ran last root/android:dex and found that it is including the intermediates/classes.jar from both the root project and the subproject:
.../actionbarsherlock/bin/intermediates/classes.jar,
.../bin/intermediates/classes.jar
That explains the dex error but I don't know how to change my build to avoid that.
I was able to replicate this issue on a much simpler project where the root project has no source and a simple build config like:
// build.sbt for root
androidBuild
javacOptions in Compile ++= "-source" :: "1.7" :: "-target" :: "1.7" :: Nil
lazy val root = project.in(file(".")).dependsOn(abs)
lazy val abs = project.in(file("actionbarsherlock"))
and:
// build.sbt for subproject
androidBuild
javacOptions in Compile ++= "-source" :: "1.7" :: "-target" :: "1.7" :: Nil
libraryDependencies ++= Seq(
  "com.android.support" % "support-v4" % "18.0.0"
)
Both projects have project/build.properties:
sbt.version=0.13.9
and project/plugins.sbt:
addSbtPlugin("org.scala-android" % "sbt-android" % "1.6.0")
The file hierarchy looks like:
.
├── actionbarsherlock
│   ├── AndroidManifest.xml
│   ├── build.sbt
│   ├── lint.xml
│   ├── project
│   │   ├── build.properties
│   │   └── plugins.sbt
│   ├── project.properties
│   ├── README.md
│   ├── res
│   │   └── ...
│   ├── src
│   │   └── ...
│   └── test
│       └── ...
├── AndroidManifest.xml
├── build.sbt
├── lint.xml
├── proguard-project.txt
├── project
│   ├── build.properties
│   └── plugins.sbt
├── project.properties
├── README.MD
├── res
│   └── ...
└── src
The typical process I have been using in sbt from a clean checkout is:
project abs
compile
project root
compile
android:package
Thanks in advance!

Remove the reference to actionbarsherlock in project.properties. The actionbarsherlock project is built automatically because it is listed in project.properties; it is duplicated because you also added a manual subproject.
Everything would have just worked if you had never set up all the sbt project machinery in the actionbarsherlock project. Given that you did set it up, you did not make proper use of the androidBuildWith function for dependent projects. Remove the androidBuild line from the root project and change the root project line to project.in(file(".")).androidBuildWith(abs)
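Putting those two changes together, the root build.sbt from the question would become something like this (a sketch; the javacOptions line and the abs definition are carried over unchanged from the question):
// build.sbt for root
javacOptions in Compile ++= "-source" :: "1.7" :: "-target" :: "1.7" :: Nil

// androidBuild removed; androidBuildWith(abs) both applies the Android
// build settings and wires abs in as a dependent Android project,
// replacing the manual dependsOn(abs)
lazy val root = project.in(file(".")).androidBuildWith(abs)

lazy val abs = project.in(file("actionbarsherlock"))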
Also, consider using the actionbarsherlock apklib rather than a subproject. On top of that, move away from actionbarsherlock altogether.
Additionally, you can build fully by just running sbt android:package from a clean build. No need for all those steps.

Related

Access JSON values using Chef and test-kitchen

I'm new to Chef and Test Kitchen, and I'm trying to use an arbitrary JSON file as attributes or an environment (preferably attributes), but unfortunately I can't access the JSON values from the recipes.
I'm using the following directory structure:
uat
├── attributes
│   ├── dev.json
│   ├── .kitchen
│   │   └── logs
│   │       └── kitchen.log
│   └── prod.json
├── Berksfile
├── Berksfile.lock
├── chefignore
├── environments
│   ├── dev.json
│   └── prod.json
├── Gemfile
├── Gemfile.lock
├── .kitchen
│   ├── default-windows.yml
│   └── logs
│       ├── default-windows.log
│       └── kitchen.log
├── .kitchen.yml
├── metadata.rb
└── recipes
    ├── default.rb
    ├── prep.rb
    └── service_install.rb
This is the .kitchen.yml:
---
driver:
  name: machine
  username: sample_user
  password: sample_pass
  hostname: 192.168.1.102
  port: 5985
provisioner:
  name: chef_zero
  json_attributes: true
  environments_path: 'environments/dev'
platforms:
  - name: windows
suites:
  - name: default
    run_list:
      - recipe[uat::default]
This is the dev.json:
{
  "groupID": "Project-name",
  "directoryName": "sample_folder",
  "environmentType": "UAT"
}
This is the recipe prep.rb :
directory "C:/Users/test/#{node['directoryName']}" do
recursive true
action :create
end
If I create something.rb in the attributes folder with the content default['directoryName'] = 'sample_folder', it works like a charm, but I need to use a JSON file in which to store parameters company-wide.
Could you please help me find what I'm doing wrong?
So, a couple of issues. First, environments_path points at a folder, not a specific file, so that should just be environments/. Second, it has to be an actual environment object; see https://docs.chef.io/environments.html#json for a description of the schema. Third, you would need to actually apply the environment to the test node:
provisioner:
  # Other stuff ...
  client_rb:
    chef_environment: dev
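For reference, a minimal sketch of what environments/dev.json would look like as an actual environment object (the attribute names are carried over from the question; the full schema is described at the docs.chef.io link above):
{
  "name": "dev",
  "description": "UAT environment",
  "chef_type": "environment",
  "json_class": "Chef::Environment",
  "default_attributes": {
    "groupID": "Project-name",
    "directoryName": "sample_folder",
    "environmentType": "UAT"
  }
}
With this in place, the recipe reads the values via node['directoryName'] as before, since default_attributes are merged into the node object.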

ERROR: Unknown template parameter: StaticFiles

I get the following error when I try to deploy to AWS Elastic Beanstalk.
Printing Status:
INFO: createEnvironment is starting.
INFO: Using elasticbeanstalk-us-west-2-695359152326 as Amazon S3 storage bucket for environment data.
ERROR: InvalidParameterValue: Unknown template parameter: StaticFiles
ERROR: Failed to launch environment.
I am using a preconfigured Docker Python template with the following structure.
.
├── application.config
├── application.py
├── Dockerfile
├── Dockerrun.aws.json
├── iam_policy.json
├── LICENSE.txt
├── misc
├── NOTICE.txt
├── README.md
├── requirements.txt
├── static
│   ├── bootstrap
│   ├── images
│   └── jquery
└── templates
    ├── aboutus.html
    ├── clients.html
    ├── commonheaderincludes.html
    ├── commonhtmlheader.html
    ├── footer.html
    ├── header.html
    ├── index.html
    └── services.html
Please help.

How to let Spark 2.0 read multiple parquet folders like CSV

I have some daily data to save to multiple folders (mostly partitioned by time). I have two formats in which to store the files, parquet and csv; I would like to use the parquet format to save some space.
The folder structure is like the following:
[root@hdp raw]# tree
.
├── entityid=10001
│   └── year=2017
│       └── quarter=1
│           └── month=1
│               ├── day=6
│               │   └── part-r-00000-84f964ec-f3ea-46fd-9fe6-8b36c2433e8e.snappy.parquet
│               └── day=7
│                   └── part-r-00000-84f964ec-f3ea-46fd-9fe6-8b36c2433e8e.snappy.parquet
├── entityid=100055
│   └── year=2017
│       └── quarter=1
│           └── month=1
│               ├── day=6
│               │   └── part-r-00000-84f964ec-f3ea-46fd-9fe6-8b36c2433e8e.snappy.parquet
│               └── day=7
│                   └── part-r-00000-84f964ec-f3ea-46fd-9fe6-8b36c2433e8e.snappy.parquet
├── entityid=100082
│   └── year=2017
│       └── quarter=1
│           └── month=1
│               ├── day=6
│               │   └── part-r-00000-84f964ec-f3ea-46fd-9fe6-8b36c2433e8e.snappy.parquet
│               └── day=7
│                   └── part-r-00000-84f964ec-f3ea-46fd-9fe6-8b36c2433e8e.snappy.parquet
└── entityid=10012
    └── year=2017
        └── quarter=1
            └── month=1
                ├── day=6
                │   └── part-r-00000-84f964ec-f3ea-46fd-9fe6-8b36c2433e8e.snappy.parquet
                └── day=7
                    └── part-r-00000-84f964ec-f3ea-46fd-9fe6-8b36c2433e8e.snappy.parquet
Now I have a Python list that stores all the folders that need to be read; on each run, only some of the folders need to be read, based on filter conditions.
folderList=df_inc.collect()
folderString=[]
for x in folderList:
    folderString.append(x.folders)
In [44]: folderString
Out[44]:
[u'/data/raw/entityid=100055/year=2017/quarter=1/month=1/day=7',
u'/data/raw/entityid=10012/year=2017/quarter=1/month=1/day=6',
u'/data/raw/entityid=100082/year=2017/quarter=1/month=1/day=7',
u'/data/raw/entityid=100055/year=2017/quarter=1/month=1/day=6',
u'/data/raw/entityid=100082/year=2017/quarter=1/month=1/day=6',
u'/data/raw/entityid=10012/year=2017/quarter=1/month=1/day=7']
The files were written by:
df_join_with_time.coalesce(1).write.partitionBy("entityid","year","quarter","month","day").mode("append").parquet(rawFolderPrefix)
When I try to read the folders stored in folderString with df_batch=spark.read.parquet(folderString), I get the error java.lang.ClassCastException: java.util.ArrayList cannot be cast to java.lang.String.
If I save the files in csv format and read them through the code below, it works fine. Is there any way to read a file list of parquet folders? Much appreciated!
In [46]: folderList=df_inc.collect()
    ...: folderString=[]
    ...:
    ...: for x in folderList:
    ...:     folderString.append(x.folders)
    ...: df_batch=spark.read.csv(folderString)
    ...:
In [47]: df_batch.show()
+------------+---+-------------------+----------+----------+
| _c0|_c1| _c2| _c3| _c4|
+------------+---+-------------------+----------+----------+
|6C25B9C3DD54| 1|2017-01-07 00:00:01|1483718401|1483718400|
|38BC1ADB0164| 3|2017-01-06 00:00:01|1483632001|1483632000|
|38BC1ADB0164| 3|2017-01-07 00:00:01|1483718401|1483718400|
You are running into a misunderstanding of partitioning in Hadoop and Parquet.
See, I have a simple file structure partitioned by year-month. It is like this:
my_folder
├── year-month=2016-12
│   └── my_files.parquet
└── year-month=2016-11
    └── my_files.parquet
If I read from my_folder without any filter in my DataFrame reader, like this:
df = spark.read.parquet("path/to/my_folder")
df.show()
If you check the Spark DAG visualization, you can see that in this case it will read all my partitions, as you said.
In the case above, each point in the first square is one partition of my data.
But if I change my code to this:
from pyspark.sql.functions import col, lit

df = spark.read.parquet("path/to/my_folder")\
    .filter((col('year-month') >= lit(my_date.strftime('%Y-%m'))) &
            (col('year-month') <= lit(my_date.strftime('%Y-%m'))))
The DAG visualization will show how many partitions are actually used.
So if you filter by the column the data is partitioned on, you will not read all the files, just the ones you need; there is no need for the solution of reading one folder at a time.
I got this solved by:
df=spark.read.parquet(folderString[0])
y=0
for x in folderString:
    if y>0:
        df=df.union(spark.read.parquet(x))
    y=y+1
It's a very ugly solution; if you have a better idea, please let me know. Many thanks.
A few days later, I found the perfect way to solve the problem:
df=spark.read.parquet(*folderString)
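A usage note on why this works (my reading, not spelled out in the original answer): spark.read.parquet accepts a variable number of path arguments. Passed as-is, the Python list crosses Py4J as a single java.util.ArrayList, which Spark then fails to cast to String, producing the ClassCastException above; the * unpacks the list into separate string arguments. A minimal sketch, assuming folderString is a plain list of path strings like the one collected earlier in the question:
# folderString: a plain Python list of partition-folder paths,
# e.g. the filtered list collected from df_inc in the question
folderString = [
    u'/data/raw/entityid=100055/year=2017/quarter=1/month=1/day=7',
    u'/data/raw/entityid=10012/year=2017/quarter=1/month=1/day=6',
]

# spark.read.parquet(folderString)   # fails: the list arrives as java.util.ArrayList
df_batch = spark.read.parquet(*folderString)  # each path passed as its own string
df_batch.show()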

Ansible, role not found error

I am trying to run the following playbook against localhost to provision a Vagrant machine:
---
- hosts: all
  become: yes
  roles:
    - base
    - jenkins
I have cloned the necessary roles from GitHub, and they reside at the relative path roles/{role name}.
Executing the following command: ansible-playbook -i "localhost," -c local playbook.yml outputs this error:
==> default: ERROR! the role 'geerlingguy.java' was not found in /home/vagrant/provisioning/roles:/home/vagrant/provisioning:/etc/ansible/roles:/home/vagrant/provisioning/roles
==> default:
==> default: The error appears to have been in '/home/vagrant/provisioning/roles/jenkins/meta/main.yml': line 3, column 5, but may
==> default: be elsewhere in the file depending on the exact syntax problem.
==> default:
==> default: The offending line appears to be:
==> default:
==> default: dependencies:
==> default: - geerlingguy.java
==> default: ^ here
I cloned the missing dependency from GitHub and tried placing it at the relative paths roles/java and roles/geerlingguy/java, but neither solved the problem; the error stays the same.
I want to keep all roles locally in the synced provisioning folder, without running ansible-galaxy at provision time, to make the provisioning method as self-contained as possible.
Here is the provision folder structure as it is now
.
├── playbook.yml
└── roles
    ├── base
    │   └── tasks
    │       └── main.yml
    ├── java
    │   ├── defaults
    │   │   └── main.yml
    │   ├── meta
    │   │   └── main.yml
    │   ├── README.md
    │   ├── tasks
    │   │   ├── main.yml
    │   │   ├── setup-Debian.yml
    │   │   ├── setup-FreeBSD.yml
    │   │   └── setup-RedHat.yml
    │   ├── templates
    │   │   └── java_home.sh.j2
    │   ├── tests
    │   │   └── test.yml
    │   └── vars
    │       ├── Debian.yml
    │       ├── Fedora.yml
    │       ├── FreeBSD.yml
    │       ├── RedHat.yml
    │       ├── Ubuntu-12.04.yml
    │       ├── Ubuntu-14.04.yml
    │       └── Ubuntu-16.04.yml
    └── jenkins
        ├── defaults
        │   └── main.yml
        ├── handlers
        │   └── main.yml
        ├── meta
        │   └── main.yml
        ├── README.md
        ├── tasks
        │   ├── main.yml
        │   ├── plugins.yml
        │   ├── settings.yml
        │   ├── setup-Debian.yml
        │   └── setup-RedHat.yml
        ├── templates
        │   └── basic-security.groovy
        ├── tests
        │   ├── requirements.yml
        │   ├── test-http-port.yml
        │   ├── test-jenkins-version.yml
        │   ├── test-plugins-with-pinning.yml
        │   ├── test-plugins.yml
        │   ├── test-prefix.yml
        │   └── test.yml
        └── vars
            ├── Debian.yml
            └── RedHat.yml
You should install or clone all required roles into the roles/ folder (or into the system roles folder):
ansible-galaxy install -p ROLES_PATH geerlingguy.java
should fix this specific problem.
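For the layout in the question, that would be something like the following (the roles path is taken from the search path in the error message; geerlingguy.java is the missing dependency):
$ ansible-galaxy install -p /home/vagrant/provisioning/roles geerlingguy.java
This drops the role at roles/geerlingguy.java, which is the directory name Ansible actually looks for.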
However, the best practice is to use a requirements.yml file in which you list all the needed roles, and then install them with ansible-galaxy directly from your playbook:
- name: run ansible galaxy
  local_action: command ansible-galaxy install -r requirements.yml --ignore-errors
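A minimal sketch of what that requirements.yml could contain (the role names are assumed from the question and the error message, not taken from an actual file):
# requirements.yml -- hypothetical contents for this playbook
- src: geerlingguy.java
- src: geerlingguy.jenkins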
A simple symbolic link works like a charm, without any installation:
$ mkdir /home/USER/ansible && ln -s /home/USER/GIT/ansible-root/roles
Here is the solution: the required path for the role is roles/geerlingguy.java/, not roles/geerlingguy/java/.

Mercurial doesn't ignore files in directory I specify

I have a Mercurial repository. There is a .hgignore file:
λ ~/workspace/kompgrafika/nurbs/ cat .hgignore
syntax: regexp
^Makefile
^bin/.*$
CMakeFiles/.*$
^CMakeCache\.txt
^cmake_install\.cmake
There is a directory named CMakeFiles that I want to ignore:
λ ~/workspace/kompgrafika/nurbs/ tree CMakeFiles
CMakeFiles
├── 3dfractals.dir
│   ├── build.make
│   ├── cmake_clean.cmake
│   ├── CXX.includecache
│   ├── DependInfo.cmake
│   ├── depend.internal
│   ├── depend.make
│   ├── flags.make
│   ├── link.txt
│   ├── progress.make
│   └── src
│       ├── DisplayControl.cpp.o
│       ├── Drawer.cpp.o
│       ├── main.cpp.o
│       ├── PointFileReader.cpp.o
│       ├── PointGenerator.cpp.o
│       └── Program.cpp.o
├── CMakeCCompiler.cmake
├── cmake.check_cache
├── CMakeCXXCompiler.cmake
├── CMakeDetermineCompilerABI_C.bin
├── CMakeDetermineCompilerABI_CXX.bin
├── CMakeDirectoryInformation.cmake
├── CMakeOutput.log
├── CMakeSystem.cmake
├── CMakeTmp
│   └── CMakeFiles
│       └── cmTryCompileExec.dir
├── CompilerIdC
│   ├── a.out
│   └── CMakeCCompilerId.c
├── CompilerIdCXX
│   ├── a.out
│   └── CMakeCXXCompilerId.cpp
├── Makefile2
├── Makefile.cmake
├── progress.marks
└── TargetDirectories.txt

7 directories, 31 files
But when I run hg status, it does not ignore 3dfractals.dir for some reason.
λ ~/workspace/kompgrafika/nurbs/ hg st
A .hgignore
A docs/pol_10.wings
? CMakeFiles/3dfractals.dir/src/DisplayControl.cpp.o
? CMakeFiles/3dfractals.dir/src/Drawer.cpp.o
? CMakeFiles/3dfractals.dir/src/PointFileReader.cpp.o
? CMakeFiles/3dfractals.dir/src/PointGenerator.cpp.o
? CMakeFiles/3dfractals.dir/src/Program.cpp.o
? CMakeFiles/3dfractals.dir/src/main.cpp.o
I am using:
λ ~/workspace/kompgrafika/nurbs/ hg --version
Mercurial Distributed SCM (version 2.0.2+5-1f9f9b4c2923)
(see http://mercurial.selenic.com for more information)
Copyright (C) 2005-2011 Matt Mackall and others
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
I also tried changing CMakeFiles/.*$ to ^CMakeFiles$. No results.
Any ideas what's wrong?
Hmm, it works here:
$ cat .hgignore
syntax:regexp
^Makefile
^bin/.*$
CMakeFiles/.*$
^CMakeCache\.txt
^cmake_install\.cmake
$ hg init
$ mkdir -p $(dirname CMakeFiles/3dfractals.dir/src/DisplayControl.cpp.o)
$ touch CMakeFiles/3dfractals.dir/src/DisplayControl.cpp.o
$ touch CMakeFiles/cmake.check_cache
$ hg status
? .hgignore
$ hg status -A
? .hgignore
I CMakeFiles/3dfractals.dir/src/DisplayControl.cpp.o
I CMakeFiles/cmake.check_cache
This is with Mercurial 2.0.2+59, so it should work the same as your version.
One thing that can trip up hg status in the way you see is the inotify extension. As mentioned on its wiki page, it is still considered experimental because it's still buggy. Check for inotify with
$ hg showconfig extensions.inotify
and disable it if necessary. If the extension is loaded from your own configuration file (check with hg showconfig --debug) then you can just remove the line that loads it. If it's loaded in a system-wide config file that you cannot change, then add
[extensions]
inotify = !
to your own config file to disable it.
I'm on Windows, but usually
CMakeFiles/*
would do the trick for me...
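One caveat on that last pattern (my note, not part of the original answer): the question's .hgignore declares syntax: regexp, under which CMakeFiles/* is read as a regular expression. For the glob form to apply, the file has to switch syntax first, for example:
# .hgignore -- glob-syntax sketch
syntax: glob
CMakeFiles/*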