I have a custom layer with parameters such as kernel size, pad info, etc. Among these I have an array parameter of type bool/int. I need to add this array parameter to the .caffemodel file after processing. How do I link/dump an array into the caffemodel file?
I have included the array parameter in the .proto file like below:
{
... //other parameters
repeated bool/int <variable_name> [packed = true];
}
Is it possible to create a blob of type bool?
You can do it through py-caffe net surgery -
net.params[learnable_layer_name].add_blob()  # append a new, empty blob to the layer's parameter list
new_blob_index = len(net.params[learnable_layer_name]) - 1
net.params[learnable_layer_name][new_blob_index].reshape(*desired_shape)  # reshape expects the dimensions as separate ints
net.params[learnable_layer_name][new_blob_index].data[:] = new_data_to_insert
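Note that Caffe blobs hold float data, so the bool/int values will be stored as floats and need to be cast back when your layer reads them. To actually get the new blob into the .caffemodel file, save the net afterwards; a minimal sketch (the filename is just a placeholder):
net.save('model_with_extra_params.caffemodel')  # serializes all layer blobs, including the newly added one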
HTH.
I have a file which contains a header (specific to a file used by Octave) and I have to read the matrix contained in the file. The idea is that I have a function, and the file location is stored in a string, say location. If I try to do load(location), I get a parse error and the matrix will not be loaded.
function [extract] = read(location)
extract = ?;  % how do I read the matrix here?
end
I'm using a service to load my form data into an array in my angular2 app.
The data is stored like this:
arr = []
arr.push({title:name})
When I do a console.log(arr), it is shown as Object. What I need is to see it
as [ { 'title':name } ]. How can I achieve that?
You can use the following:
JSON.stringify({ data: arr }, null, 4);
This will nicely format your data with indentation.
To print out readable information, you can use console.table(), which is much easier to read than raw JSON:
console.table(data);
This function takes one mandatory argument, data, which must be an array or an object, and one additional optional parameter, columns.
It logs data as a table: each element in the array (or each enumerable property, if data is an object) becomes a row in the table.
Example:
First convert your JSON string to an object using the JSON.parse() method, and then print it in the console using console.table(), passing the parsed object.
e.g.
const data = JSON.parse(jsonString);
console.table(data);
Try using the JSON pipe in the HTML template. As the JSON output was needed only for debugging purposes, this method was suitable for me. A sample is given below:
<p>{{arr | json}}</p>
You could log each element of the array separately
arr.forEach(function(e){console.log(e)});
Since your array has just one element, this is the same as logging {'title':name}
You can print any object directly:
console.log(this.anyObject);
However, when you write
console.log('any object ' + this.anyObject);
the concatenation first converts the object to a string, so it prints
any object [object Object]
I'm testing Data Lake for an application I am developing. I'm new to U-SQL and Data Lake, and I'm just trying to query all records in a JSON file. Right now it's only returning one record, and I'm not sure why, because the file has about 200.
My code is:
DECLARE @input string = @"/MSEStream/output/2016/08/12_0_fc829ede3c1d4cf9a3278d43e7e4e9d0.json";
REFERENCE ASSEMBLY [Newtonsoft.Json];
REFERENCE ASSEMBLY [Microsoft.Analytics.Samples.Formats];
@allposts =
    EXTRACT
        id string
    FROM @input
    USING new Microsoft.Analytics.Samples.Formats.Json.JsonExtractor();
@result =
    SELECT *
    FROM @allposts;
OUTPUT @result
TO "/ProcessedQueries/all_posts.csv"
USING Outputters.Csv();
Data Example:
{
    "id":"398507",
    "contenttype":"POST",
    "posttype":"post",
    "uri":"http://twitter.com/etc",
    "title":null,
    "profile":{
        "#class":"PublisherV2_0",
        "name":"Company",
        "id":"2163171",
        "profileIcon":"https://pbs.twimg.com/image",
        "profileLocation":{
            "#class":"DocumentLocation",
            "locality":"Toronto",
            "adminDistrict":"ON",
            "countryRegion":"Canada",
            "coordinates":{
                "latitude":43.7217,
                "longitude":-31.432
            },
            "quadKey":"000000000000000"
        },
        "displayName":"Name",
        "externalId":"00000000000"
    },
    "source":{
        "name":"blogs",
        "id":"18",
        "param":"Twitter"
    },
    "content":{
        "text":"Description of post"
    },
    "language":{
        "name":"English",
        "code":"en"
    },
    "abstracttext":"More Text and links",
    "score":{}
}
Thank you for the help in advance
The JsonExtractor takes an argument that allows you to specify which items or objects are being mapped into rows using a JSON Path expression. If you don’t specify anything it will take the top root (which is one row).
You want every one of the items in the array, so you specify it as:
USING new Microsoft.Analytics.Samples.Formats.Json.JsonExtractor("[*]");
where [*] is the JSON Path expression that says "give me all the elements of the array", which in this case is the top-level array.
If you have a JSON node named id, the original script posted in the question will return the node with name "id" under the root node. To get all the nodes, structure your script as:
@allposts =
    EXTRACT
        id string,
        contenttype string,
        posttype string,
        uri string,
        title string,
        profile string
    FROM @input
    USING new Microsoft.Analytics.Samples.Formats.Json.JsonExtractor();
Please let us know if it works. The alternative would be to use a native extractor to read it all into a single string (as MRys mentioned, this works as long as your JSON is under 128 KB).
@allposts =
    EXTRACT
        json string
    FROM @input
    USING Extractors.Text(delimiter:'\b', quoting:false);
I want to read a JSON or XML file in PySpark, but my file is split across multiple lines when I read it with
rdd = sc.textFile(json_or_xml_file)
Input
{
    "employees":
    [
        {
            "firstName":"John",
            "lastName":"Doe"
        },
        {
            "firstName":"Anna"
        }
    ]
}
Input is spread across multiple lines.
Expected Output: {"employees":[{"firstName":"John",......]}
How to get the complete file in a single line using pyspark?
There are three ways to do this (the first two are standard built-in Spark functions; the third one I came up with myself). The solutions here are in PySpark:
textFile, wholeTextFiles, and a "labeled" textFile (key = file, value = one line from the file; this is kind of a mix between the two standard ways to parse files).
1.) textFile
input:
rdd = sc.textFile('/home/folder_with_text_files/input_file')
output: array containing one line of the file per entry, i.e. [line1, line2, ...]
2.) wholeTextFiles
input:
rdd = sc.wholeTextFiles('/home/folder_with_text_files/*')
output: array of tuples; the first item is the "key" with the file path, the second item contains one file's entire contents, i.e.
[(u'file:/home/folder_with_text_files/', u'file1_contents'), (u'file:/home/folder_with_text_files/', u'file2_contents'), ...]
3.) "Labeled" textFile
input:
import glob
from pyspark import SparkContext
from pyspark.sql import SQLContext

SparkContext.stop(sc)
sc = SparkContext("local", "example")  # if running locally
sqlContext = SQLContext(sc)

Spark_Full = sc.emptyRDD()  # accumulator RDD, starts empty
for filename in glob.glob(Data_File + "/*"):  # Data_File = path to the folder of text files
    Spark_Full += sc.textFile(filename).keyBy(lambda x: filename)  # key every line by its filename
output: array with each entry containing a tuple using the filename as key and value = each line of the file. (Technically, using this method you can also use a different key than the actual file path, perhaps a hashed representation to save memory.) i.e.
[('/home/folder_with_text_files/file1.txt', 'file1_contents_line1'),
('/home/folder_with_text_files/file1.txt', 'file1_contents_line2'),
('/home/folder_with_text_files/file1.txt', 'file1_contents_line3'),
('/home/folder_with_text_files/file2.txt', 'file2_contents_line1'),
...]
You can also recombine the result, either as a list of lines per file:
Spark_Full.groupByKey().map(lambda x: (x[0], list(x[1]))).collect()
[('/home/folder_with_text_files/file1.txt', ['file1_contents_line1', 'file1_contents_line2','file1_contents_line3']),
('/home/folder_with_text_files/file2.txt', ['file2_contents_line1'])]
Or recombine entire files back into single strings (in this example the result is the same as what you get from wholeTextFiles, but with the string "file:" stripped from the file paths):
Spark_Full.groupByKey().map(lambda x: (x[0], ' '.join(list(x[1])))).collect()
If your data is not formed on one line as textFile expects, then use wholeTextFiles.
This will give you the whole file so that you can parse it down into whatever format you would like.
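For the PySpark case in the question, a minimal sketch of that approach could look like this (the path is a placeholder, and it assumes the sample JSON has been made valid):
import json

rdd = sc.wholeTextFiles("hdfs:///path/to/employees.json")  # (filepath, whole_file_contents) pairs
whole_text = rdd.values().first()                          # the complete file as one string
one_line = json.dumps(json.loads(whole_text))              # re-serialized onto a single line, as in the expected output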
This is how you would do it in Scala:
val rdd = sc.wholeTextFiles("hdfs://nameservice1/user/me/test.txt")
rdd.collect.foreach(t => println(t._2))
"How to read whole [HDFS] file in one string [in Spark, to use as sql]":
e.g.
// Put file to hdfs from edge-node's shell...
hdfs dfs -put <filename>
// Within spark-shell...
// 1. Load file as one string
val f = sc.wholeTextFiles("hdfs:///user/<username>/<filename>")
val hql = f.take(1)(0)._2
// 2. Use string as sql/hql
val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
val results = hiveContext.sql(hql)
Python way
rdd = spark.sparkContext.wholeTextFiles("hdfs://nameservice1/user/me/test.txt")
text = rdd.collect()[0][1]  # the whole file as a single string
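If you need the content as Python objects rather than one long string, the collected text can then be parsed with the standard library (a small extra step beyond the original answer):
import json
parsed = json.loads(text)  # a dict, e.g. parsed["employees"] for the sample input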
I have a very large .json file on disk. I want to instantiate this as a Java object using the Jackson parser.
The file looks like this:
[ { "prop1": "some_value",
"prop2": "some_other_value",
"something_random": [
// ... arbitrary list of objects containing key/value
// pairs of differing amounts and types ...
]
},
// ... repeated many times ...
{
}
]
Basically it's a big array of objects and each object has two string properties that identify it, then another inner array of objects where each object is a random collection of properties and values which are mostly strings and ints, but may contain arrays as well.
Due to this object layout, there isn't a set schema I can use to easily instantiate these objects. Using the org.json processor requires attempting to allocate a string for the entire file, which often fails due to its size. So I'd like to use the streaming parser, but I am completely unfamiliar with it.
What I want in the end is a Map<String, SomeObject>, where the String is the value of prop1 and SomeObject is something that holds the data for the whole object (the top-level array entry). Perhaps just the JSON, which can then be parsed later when it is needed?
Anyway, ideas on how to go about writing the code for this are welcome.
Since you do not want to bind the whole thing as a single object, you probably want to use the readValues() method of ObjectReader. And if the structure of the individual values is somewhat generic, you may want to bind them either as java.util.Maps or JsonNodes (Jackson's tree model). So you would do something like:
ObjectMapper mapper = new ObjectMapper();
ObjectReader reader = mapper.reader(Map.class); // or JsonNode.class
MappingIterator<Map> it = reader.readValues(new File("stuff.json"));
while (it.hasNextValue()) {
Map m = it.nextValue();
// do something; like determine real type to use and:
OtherType value = mapper.convertValue(m, OtherType.class);
}
to iterate over the whole thing.