I am making a Python script with Jython and I need to use the json module, which doesn't exist in Jython 2.5. Does anyone know a way to include a module as a single file that can be moved around with the script, without installing it on the host's Jython? I was planning on using the simplejson module I found on PyPI, if that helps.
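For illustration, that plan might look roughly like this (just a sketch, assuming the pure-Python simplejson package, or any single-file module, has been copied into the same directory as the script):
import os
import sys
# Make the script's own directory importable, so a module copied next to it
# is found without installing anything into the host's Jython.
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
import simplejson as json  # the bundled pure-Python package
print(json.dumps({"hello": "world"}))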
Try http://opensource.xhaus.com/projects/jyson, a fast JSON codec for Jython 2.5 written in Java.
Jython 2.7.0 now includes the standard library json module, which is reasonably fast now that it has been ported to Java. I ran the JSON benchmarks in the standard Python benchmark suite:
### json_dump ###
Min: 0.385395 -> 0.634000: 1.65x slower
Avg: 0.388340 -> 0.831400: 2.14x slower
Significant (t=-3.59)
Stddev: 0.00331 -> 0.27605: 83.3334x larger
### json_dump_v2 ###
Min: 2.642799 -> 3.480000: 1.32x slower
Avg: 2.680320 -> 3.715000: 1.39x slower
Significant (t=-6.72)
Stddev: 0.04087 -> 0.34167: 8.3607x larger
### json_load ###
Min: 0.816147 -> 2.266000: 2.78x slower
Avg: 0.832826 -> 2.578800: 3.10x slower
Significant (t=-8.27)
Stddev: 0.01652 -> 0.47203: 28.5677x larger
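For reference, basic usage of the bundled json module under Jython 2.7 is the same as on CPython:
import json
text = json.dumps({"answer": 42, "items": [1, 2, 3]})
data = json.loads(text)
print(data["items"])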
Other options like Gson, Jackson, or Jyson would likely be faster, given the API of the json module.
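A rough sketch of what using jyson from Jython looks like (this assumes the jyson jar is on the classpath; the import path and the loads/dumps names follow the jyson project's documentation, so double-check them there):
from com.xhaus.jyson import JysonCodec as json  # Java codec exposed to Jython
text = json.dumps({"a": 1, "b": [1, 2, 3]})  # Jython dict/list -> JSON string
data = json.loads(text)  # JSON string -> Jython objects
print(data["b"])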
I've written a Codename One application in NetBeans and I'm testing it via the Simulator.
I have a local SQLite database and can execute a simple query in my application e.g.
SELECT *
FROM tempJSON;
When I try to introduce a function (e.g. json_tree) from the JSON1 Extension (https://www.sqlite.org/json1.html) e.g.
SELECT j.value
FROM tempJSON AS d
JOIN json_tree(d.textJSON) AS j
WHERE j.key = 'RunnerName';
I receive the following error:
java.io.IOException: [SQLITE_ERROR] SQL error or missing database (near "(": syntax error)
Note: both queries execute successfully in SQLiteStudio
What am I missing? (e.g. a configuration issue)
Or is this not possible (yet)?
You can't use extensions in the standard SQLite. On the device we use the built-in SQLite versions, and they differ a bit between iOS and Android, so relying on an extension that might not be there is problematic.
As a solution we did this: https://www.codenameone.com/blog/spatial-pluggable-sqlite.html
This was done for spatial extensions, but the concept is identical if you want to support JSON extensions: bundle your own copy of SQLite.
I want to convert this NSFW model to a Core ML model. What I did:
Downloaded Anaconda with Python 2.7
Installed coremltools
Converted the Yahoo open_nsfw model from here - https://github.com/yahoo/open_nsfw/tree/master/nsfw_model, though I am not sure it's Caffe v1, because Apple's documentation says only that version is supported. Anyway…
I used these commands for the conversion, and it converted without any warnings:
coreml_model = coremltools.converters.caffe.convert(('resnet_50_1by2_nsfw.caffemodel', 'deploy.prototxt'), image_input_names='data')
coreml_model.save('nsfw2.mlmodel')
I imported this model into my project and again everything looks fine.
I prepared 224x224 images and used the Vision framework, i.e. VNImageRequestHandler with a cgImage, etc.
But!
All images return the same result:
[<VNCoreMLFeatureValueObservation: 0x281b1daa0> 2E00F417-95C0-4AA1-A621-A0945BB5E095 requestRevision=1 confidence=1.000000 "prob" - "MultiArray : Double 1 x 1 x 2 x 1 x 1 array" (1.000000)]
How can I debug this issue and find out what's wrong?
Maybe you're looking only at naughty images? ;-)
It's probably the image preprocessing. You didn't specify any preprocessing options, while Caffe models usually normalize using the ImageNet mean/std. Refer to my blog post for more info: https://machinethink.net/blog/help-core-ml-gives-wrong-output/
However, I don't see any normalization options in your deploy.prototxt, so perhaps it's not that.
How I would debug this: remove everything but the first layer from the Caffe model and convert to Core ML. Run this one-layer model in both Caffe and Core ML and compare the outputs. If they are different, something is up with how you're loading or preprocessing the input data.
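If it does turn out to be preprocessing, the Caffe converter lets you specify it at conversion time. A sketch, assuming the model expects BGR input with ImageNet-style mean subtraction (the exact mean values here are an assumption; check how open_nsfw was trained):
import coremltools
coreml_model = coremltools.converters.caffe.convert(
    ('resnet_50_1by2_nsfw.caffemodel', 'deploy.prototxt'),
    image_input_names='data',
    is_bgr=True,          # Caffe models are usually trained on BGR images
    red_bias=-123.0,      # per-channel mean subtraction (assumed ImageNet means)
    green_bias=-117.0,
    blue_bias=-104.0,
)
coreml_model.save('nsfw2.mlmodel')
Core ML applies these as scale * pixel + bias, so negative biases subtract the means.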
On Perl 5, if someone wants to parse a binary file, they have the pack/unpack utility to convert binary structures to Perl variables and vice versa.
Is there now a production-ready equivalent of pack/unpack on Perl 6? From the documentation I found that there are pack/unpack methods for Perl 6, but they are experimental.
Does anyone know the status of those functions, and whether there is an alternative for parsing a binary file that contains a list of records on Perl 6?
You are correct, the pack/unpack methods are experimental; there is currently no other method that is recommended in their place, however.
The experimental flag indicates that the Perl 6 dev team may still change the interface. pack and unpack were marked this way because there was not enough time to review and update the interface before the Christmas release in December 2015.
I am using Rakudo:
use experimental :pack;
pack("C*", [1, 2, 3]);  # => Buf:0x<01>
I am not sure this is the correct use; I expected all the bytes to get packed in.
I am trying to use the OCaml csv library. I downloaded csv-1.2.3 and followed the installation instructions after installing findlib:
Uncompress the source archive and go to the root of the package,
Run 'ocaml setup.ml -configure',
Run 'ocaml setup.ml -build',
Run 'ocaml setup.ml -install'
Now I have the META, csv.a, csv.cma, csv.cmi, csv.cmx, csv.cmxa, and csv.mli files in the ~/opt/lib/ocaml/site-lib/csv directory. The shell command ocamlfind list -describe gives "csv  A pure OCaml library to read and write CSV files. (version: 1.2.3)", which I believe means that csv is installed properly.
BUT when I add
let data = Csv.load "foo.csv" in
in my compute.ml module and try to compile it within the larger program package, I get the compilation error:
File "_none_", line 1, characters 0-1:
Error: No implementations provided for the following modules:
Csv referenced from compute.cmx
and if I simply type
let data = load "foo.csv" in
I get:
File "compute.ml", line 74, characters 13-17:
Error: Unbound value load
I get the same type of errors when I use Csv.load or load directly in the OCaml toplevel. Would somebody have an idea of what is wrong with my code or library installation?
My guess is that you're using ocamlfind for compilation (ocamlfind ocamlc -package csv ...), because you have a linking error, not a type-checking one (which would be the case if you had not specified at all where csv is). The solution may be, in this case, to add a -linkall option to the final compilation line producing an executable, to ask it to link csv.cmx with it. Otherwise, please try to use ocamlfind and yes, tell us what your compilation command is.
For the toplevel, it is very easy to use ocamlfind from it. Watch this toplevel interaction:
% ocaml
Objective Caml version 3.12.1
# #use "topfind";;
- : unit = ()
Findlib has been successfully loaded. Additional directives:
#require "package";; to load a package
#list;; to list the available packages
#camlp4o;; to load camlp4 (standard syntax)
#camlp4r;; to load camlp4 (revised syntax)
#predicates "p,q,...";; to set these predicates
Topfind.reset();; to force that packages will be reloaded
#thread;; to enable threads
- : unit = ()
# #require "csv";;
/usr/lib/ocaml/csv: added to search path
/usr/lib/ocaml/csv/csv.cma: loaded
# Csv.load;;
- : ?separator:char -> ?excel_tricks:bool -> string -> Csv.t = <fun>
To be explicit, what I typed once in the toplevel was:
#use "topfind";;
#require "csv";;
Csv.load;; (* or anything else that uses Csv *)
What is an alternative to Autotools in the Haskell world? I want to be able to choose between different configurations of the same source code.
For example, there are at least two implementations of MD5 in Haskell: Data.Digest.OpenSSL.MD5 and Data.Digest.Pure.MD5. I'd like to write the code in such a way that it figures out which library is already installed and doesn't require installing the other.
In C I can use Autotools/Scons/CMake + cpp. In Python I can catch ImportError. Which tools should I use in Haskell?
In Haskell you use Cabal configurations. In your project's top-level directory, you put a file with the extension .cabal, e.g. <yourprojectname>.cabal. The contents are roughly:
Name: myfancypackage
Version: 0.0
Description: myfancypackage
License: BSD3
License-file: LICENSE
Author: John Doe
Maintainer: john@example.com
Build-Type: Simple
Cabal-Version: >=1.4
Flag pure-haskell-md5
  Description: Choose the purely Haskell MD5 implementation
  Default: False

Executable haq
  Main-is: Haq.hs
  Build-Depends: base-4.*
  if flag(pure-haskell-md5)
    Build-Depends: pureMD5-0.2.*
  else
    Build-Depends: hopenssl-1.1.*
The Cabal documentation has more details, in particular the section on Configurations.
As nominolo says, Cabal is the tool to use. In particular, the "configurations" syntax.