Importing a *.dae file with multiple textures into Papervision3D - actionscript-3

Usually when you export a 3D object in *.dae format, a folder comes with the file containing the object's textures. Does anybody know how to add a *.dae file and its textures to a project?

You should place the textures in the same folder as the *.dae file and load your object and textures like this:
// load each texture as a BitmapFileMaterial
var bm1:BitmapFileMaterial = new BitmapFileMaterial('PATH_TO_TEXTURE', true);
var bm2:BitmapFileMaterial = new BitmapFileMaterial('PATH_TO_ANOTHER_TEXTURE', true);
// register the materials under the names referenced inside the *.dae
var mat:MaterialsList = new MaterialsList();
mat.addMaterial(bm1, 'MATERIAL_NAME');
mat.addMaterial(bm2, 'ANOTHER_MATERIAL_NAME');
// load the model, passing the materials list
var obj:DAE = new DAE();
obj.useOwnContainer = true;
obj.load('PATH_TO_DAE', mat);
Also, materials should be correctly linked in *.dae. Something like this:
...
<library_images>
  <image id="TEXTURE_NAME-image" name="TEXTURE_NAME">
    <init_from>2/TEXTURE_NAME.png</init_from>
  </image>
</library_images>
<library_materials>
  <material id="TEXTURE_NAME" name="TEXTURE_NAME">
    <instance_effect url="#TEXTURE_NAME-fx"/>
  </material>
</library_materials>
...
<library_visual_scenes>
  <visual_scene id="RootNode" name="RootNode">
    <node id="TEXTURE_NAME_tp3_Mesh01" name="TEXTURE_NAME_tp3_Mesh01">
      <matrix sid="matrix">1.000000 0.000000 0.000000 0.000000 0.000000 0.000000 1.000000 0.000000 0.000000 -1.000000 0.000000 0.000000 0.000000 0.000000 0.000000 1.000000</matrix>
      <instance_geometry url="#TEXTURE_NAME_tp3_Mesh01-lib">
        <bind_material>
          <technique_common>
            <instance_material symbol="MATERIAL_NAME" target="#MATERIAL_NAME"/>
          </technique_common>
        </bind_material>
      </instance_geometry>
    </node>
  </visual_scene>
</library_visual_scenes>
...

Related

How to understand the base64 encoding in the VTK binary file format

I have a problem understanding the binary DataArray, specifically because of the base64 encoding.
The manual says that if the format of a DataArray is binary,
The data are encoded in base64 and listed contiguously inside the DataArray element. Data may also be compressed before encoding in base64. The byte-order of the data matches that specified by the byte_order attribute of the VTKFile element.
I cannot fully understand that, so I have produced an ASCII file and a binary file for the same model.
ASCII file
<?xml version="1.0"?>
<VTKFile type="UnstructuredGrid" version="0.1" byte_order="LittleEndian" header_type="UInt32" compressor="vtkZLibDataCompressor">
<UnstructuredGrid>
<Piece NumberOfPoints="4" NumberOfCells="1">
<PointData>
</PointData>
<CellData>
</CellData>
<Points>
<DataArray type="Float32" Name="Points" NumberOfComponents="3" format="ascii" RangeMin="0" RangeMax="1.4142135624">
0 0 0 1 0 0
1 1 0 0 1 1
</DataArray>
</Points>
<Cells>
<DataArray type="Int64" Name="connectivity" format="ascii" RangeMin="0" RangeMax="3">
0 1 2 3
</DataArray>
<DataArray type="Int64" Name="offsets" format="ascii" RangeMin="4" RangeMax="4">
4
</DataArray>
<DataArray type="UInt8" Name="types" format="ascii" RangeMin="10" RangeMax="10">
10
</DataArray>
</Cells>
</Piece>
</UnstructuredGrid>
</VTKFile>
Binary file
<?xml version="1.0"?>
<VTKFile type="UnstructuredGrid" version="0.1" byte_order="LittleEndian" header_type="UInt32" compressor="vtkZLibDataCompressor">
<UnstructuredGrid>
<Piece NumberOfPoints="4" NumberOfCells="1">
<PointData>
</PointData>
<CellData>
</CellData>
<Points>
<DataArray type="Float32" Name="Points" NumberOfComponents="3" format="binary" RangeMin="0" RangeMax="1.4142135624">
AQAAAACAAAAwAAAAEQAAAA==eJxjYEAGDfaobEw+ADwjA7w=
</DataArray>
</Points>
<Cells>
<DataArray type="Int64" Name="connectivity" format="binary" RangeMin="0" RangeMax="3">
AQAAAACAAAAgAAAAEwAAAA==eJxjYIAARijNBKWZoTQAAHAABw==
</DataArray>
<DataArray type="Int64" Name="offsets" format="binary" RangeMin="4" RangeMax="4">
AQAAAACAAAAIAAAACwAAAA==eJxjYYAAAAAoAAU=
</DataArray>
<DataArray type="UInt8" Name="types" format="binary" RangeMin="10" RangeMax="10">
AQAAAACAAAABAAAACQAAAA==eJzjAgAACwAL
</DataArray>
</Cells>
</Piece>
</UnstructuredGrid>
</VTKFile>
When I looked at the DataArray, using the last one as an example, I cannot see the relationship between AQAAAACAAAABAAAACQAAAA==eJzjAgAACwAL and 10.
My understanding can be expressed with the following code, but it outputs CggAAA==.
#include "base64.h" // https://github.com/superwills/NibbleAndAHalf/blob/master/NibbleAndAHalf/base64.h
#include <iostream>
int main()
{
int x = 10;
int len;
// first arg: binary buffer
// second arg: length of binary buffer
// third arg: length of ascii buffer
char *ascii = base64((char *)&x, sizeof(int), &len);
std::cout << ascii << std::endl;
std::cout << len << std::endl;
free(ascii);
return 0;
}
Can someone give me an explanation of how to convert?
Another related topic can be seen at
https://discourse.vtk.org/t/error-when-writing-binary-vtk-files/4487/7
Thanks for your time.
The solution can be found in the discussion at
https://discourse.vtk.org/t/how-to-understand-binary-dataarray-in-xml-vtk-output/4489
The long extra data comes from the compressor header.
I have found the solution and written the answer in a VTK support question, but I am writing it here in case anyone comes here looking for the same issue as us two.
Note that I program in Python, but I believe there are base64 and zlib functions in C++. Also, I use numpy to define arrays, but I believe std::vector can be equivalently used in C++.
So, suppose we want to write the single precision float32 array called "Points" in your example. If we suppose that a header type of "UInt32" is used, then in Python, we would do:
import numpy as np
import zlib
import base64

# write the float array.
arr = np.array([0, 0, 0, 1, 0, 0,
                1, 1, 0, 0, 1, 1], dtype='float32')
# generate a zlib compressed array. This outputs a python bytes object
arr_comp = zlib.compress(arr)
# generate the uncompressed header
header = np.array([1,               # apparently this is always the case, I think
                   2**15,           # from what I have read, this is true in general
                   arr.nbytes,      # the size of the array `arr` in bytes
                   len(arr_comp)],  # the size of the compressed array
                  dtype='uint32')   # because of header_type="UInt32"
# use base64 encoding when writing to file
# `.decode("utf-8")` transforms the python bytes object into a string
print((base64.b64encode(header) + base64.b64encode(arr_comp)).decode("utf-8"))
The output is as expected:
AQAAAACAAAAwAAAAEQAAAA==eJxjYEAGDfaobEw+ADwjA7w=
The 2**15 is the argument that controls the size of the history buffer (or the “window size”) used when compressing data, according to the zlib python docs. Not sure what that means though...
Edit: The above code only works if the size in bytes of the array is less than or equal to 2**15. In the VTK support question I have expanded for the case where the array is larger. You have to divide it into chunks.
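For completeness, here is a minimal decoding sketch going the other way (my own illustration, not code from VTK; it assumes a "UInt32" header type, little-endian byte order and a single zlib-compressed block, as in the files above). Applied to the types array from the question, it recovers the value 10:
import base64
import zlib
import numpy as np

s = "AQAAAACAAAABAAAACQAAAA==eJzjAgAACwAL"

# four UInt32 header values -> 16 bytes -> exactly 24 base64 characters
header = np.frombuffer(base64.b64decode(s[:24]), dtype='uint32')
nblocks, block_size, last_block_size, compressed_size = header

# the remainder is the zlib-compressed payload of the single block
payload = zlib.decompress(base64.b64decode(s[24:]))
values = np.frombuffer(payload, dtype='uint8')  # because type="UInt8"

print(header)  # [1 32768 1 9]
print(values)  # [10]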

Automating a process for multiple CSV files

I've been looking around and couldn't find the answer so here it is.
I'm trying to find a way to automate changing the content of a CSV file into something else for machine learning purposes. I have the content of a single line like this:
0, 0, 0, -2.3145, 5.567...... 65, 65, 125, 70.
(516 columns)
And trying to change it to this:
0,
0,
-2.3145,
5.567
....
65,
65,
125,
70.
(516 rows)
So basically transposing the data from horizontal to vertical (single row to single column).
It's easily done using Excel, but the problem is I have 4000+ of these CSV files, so it takes a lot of time.
On top of that, I have to store the first 512 rows in a CSV in one folder and the last 4 rows in another CSV in a different folder, while both files keep the same name.
Eg:
features(folder)
1.CSV
2.CSV
.....
4000+.CSV
labels(folder)
1.CSV
2.CSV
.....
4000+.CSV
Any suggestions on how I can speed things up? I tried writing my own program, but I'm stumped on changing it from a row to a column. I've only managed to split the single CSV file into its 4000+ pieces.
EDIT:
I've tested putting the CSV rows into an array and then storing the array back into a CSV; the code looks like this:
import csv

with open('FFTMIM16_512L1H1S0D0_1194.csv', 'r') as f:
    reader = csv.reader(f)
    your_list = list(reader)
print(your_list[0:512])
print(your_list[512:516])
print(your_list)
with open('test.csv', 'w', newline='') as fa:
    writer = csv.writer(fa)
    writer.writerows(your_list[0:511])
with open('test1.csv', 'w', newline='') as fb:
    writer = csv.writer(fb)
    writer.writerows(your_list[512:516])
It works, but I just need to run it in a loop. One thing I don't understand: if I save the values from 0 to 512 into test.csv, it shows 512 rows, but when I store from 513 to 516 into test1.csv, it only shows three rows instead of the four that I need. Changing the fb slice to 512 to 516 works, which doesn't make sense to me, because the value at 512 in test.csv is 0 while in test1.csv it is 69. Why is that? From what I understand about array indexing, it starts from 0 and goes up to the number I specify. Or is that not the case in Python?
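For what it's worth, Python slices are half-open: the stop index is excluded, so your_list[0:512] covers indices 0 through 511 and your_list[512:516] covers indices 512 through 515, which is exactly four rows. A tiny illustration of just the indexing (a stand-in list, not your file handling):
your_list = list(range(516))           # stand-in for the 516 values read from one CSV
print(len(your_list[0:512]))           # 512 -> indices 0..511
print(len(your_list[512:516]))         # 4   -> indices 512..515
print(your_list[511], your_list[512])  # 511 512 -> index 512 falls in the second slice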
EDIT 2:
My new code is as follows:
import csv
import os
import glob
#import itertools

directory = input("INPUT FOLDER: ")
output1 = input("FEATURES FOLDER: ")
output2 = input("LABELS FOLDER: ")
in_files = os.path.join(directory, '*.csv')
for in_file in glob.glob(in_files):
    with open(in_file) as input_file:
        reader = csv.reader(input_file)
        your_list = list(reader)
    filename = os.path.splitext(os.path.basename(in_file))[0] + '.csv'
    with open(os.path.join(output1, filename), 'w', newline='') as output_file1:
        writer = csv.writer(output_file1)
        writer.writerow(your_list[0:512])
    with open(os.path.join(output2, filename), 'w', newline='') as output_file2:
        writer = csv.writer(output_file2)
        writer.writerow(your_list[512:516])
It shows the output as I wanted, but now it also stores apostrophes and brackets, e.g. ['0.0'], ['2.321223']. How do I remove these?
I don't understand why you can't do it programmatically if you already have your 4000+ pieces; just write every piece on a new line.
In my opinion the easiest way, though not an automatic one, would be an editor like Notepad++.
There you can replace "," with "\r\n", or if you want to keep the "," you replace it with ",\r\n".
If you want it automated, I don't see a non-programmatic way.
By the way, if you use Python with numpy/scipy, you can just use the .transpose() function.
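As a rough sketch of that numpy route (my own example; the file name 1.CSV is a placeholder and I assume it holds a single numeric row):
import numpy as np

row = np.loadtxt('1.CSV', delimiter=',')   # shape (516,): the single input row
col = row.reshape(1, -1).transpose()       # shape (516, 1): one value per row
np.savetxt('1_transposed.csv', col, delimiter=',')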
*Edit to your comment:
What do you mean by "split from the first to the 512"? If you want parts of size 512, it would be something like:
new_array = []
temp_array = []
k = 0
for num in your_array:
    temp_array.append(num)
    k += 1
    if k % 512 == 0:
        new_array.append(temp_array)
        k = 0
        temp_array = []
# to append the last block which might not be 512 sized
if len(temp_array) > 0:
    new_array.append(temp_array)
# Save Arrays (saveToCsv is your own helper, see below)
for i in range(len(new_array)):
    saveToCsv(array=new_array[i], name="csv_" + str(i))
Your new_array would now be an array filled with 512-sized arrays.
There might be mistakes here, I did not test the code. To save, you only need a function saveToCsv(array, name) which saves an array into a file.
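For reference, a minimal end-to-end sketch of the whole split (untested, my own illustration; the folder names are placeholders and I assume each input CSV holds 516 numeric values, whether as one row or one value per line). Writing each value as its own one-element row also avoids the bracket/apostrophe artefacts from EDIT 2:
import csv
import glob
import os

input_dir, features_dir, labels_dir = 'input', 'features', 'labels'  # placeholder paths

for path in glob.glob(os.path.join(input_dir, '*.csv')):
    with open(path, newline='') as f:
        # flatten whatever shape the file has (one row of 516 columns,
        # or 516 rows of one column) into a flat list of values
        values = [v for row in csv.reader(f) for v in row]
    name = os.path.basename(path)
    with open(os.path.join(features_dir, name), 'w', newline='') as f:
        csv.writer(f).writerows([v] for v in values[:512])     # first 512 values, one per row
    with open(os.path.join(labels_dir, name), 'w', newline='') as f:
        csv.writer(f).writerows([v] for v in values[512:516])  # last 4 values, one per row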

R pheatmap row annotation and title font size questions

I have been trying to add row annotation to my heatmap created with pheatmap in R. Basically I have a csv file with one particular column (Group) to be used for row annotation in the heatmap. However, I'm having trouble with the code below. There are two other issues: the font size of the title is apparently too big, but I could not find a way to decrease it; and I wanted to map zero values to pure white, but I am not sure it is really white in my output file. The input csv file and output pdf files are linked. I am sticking with pheatmap here since I found that it creates a heatmap that fits my needs better than other heatmap functions. Suggestions are appreciated.
> library("pheatmap")
> data <- read.csv("/Users/neo/Test_BP_052215.csv", header = TRUE, row.names = 2, stringsAsFactors=F)
> head(data)
Group WT KO1 KO2
GO:0018904 organic ether metabolic process Metabolism 12.17372951 0.000000 -15.006995
GO:0006641 triglyceride metabolic process Metabolism 5.200847907 0.000000 0.000000
GO:0045444 fat cell differentiation Metabolism 6.374521098 0.000000 -7.927192
GO:0006639 acylglycerol metabolic process Metabolism 6.028616852 0.000000 0.000000
GO:0016125 sterol metabolic process Metabolism 5.760678325 8.262778 0.000000
GO:0016126 sterol biosynthetic process Metabolism -6.237114754 9.622373 0.000000
> heatdata <- data[,-1]
> head(heatdata)
WT KO1 KO2
GO:0018904 organic ether metabolic process 12.17372951 0.000000 -15.006995
GO:0006641 triglyceride metabolic process 5.200847907 0.000000 0.000000
GO:0045444 fat cell differentiation 6.374521098 0.000000 -7.927192
GO:0006639 acylglycerol metabolic process 6.028616852 0.000000 0.000000
GO:0016125 sterol metabolic process 5.760678325 8.262778 0.000000
GO:0016126 sterol biosynthetic process -6.237114754 9.622373 0.000000
> annotation_row <- data.frame(Group = data[,1])
> rownames(annotation_row) = paste("Group", 1:38, sep = "")
> ann_colors = list( Group = c(Metabolism="navy", Cellular="skyblue", Signal="steelblue", Transport="green", Cell="purple", Protein="yellow", Other="firebrick") )
> head(annotation_row)
Group
Group1 Metabolism
Group2 Metabolism
Group3 Metabolism
Group4 Metabolism
Group5 Metabolism
Group6 Metabolism
> col_breaks = unique(c(seq(-16,-0.5,length=200), seq(-0.5,0.5,length=200), seq(0.5,20,length=200)))
> my_palette <- colorRampPalette(c("blue", "white", "red"))(n = 599)
> pheatmap(heatdata, main="Enrichment", color=my_palette, breaks=col_breaks, border_color = "grey20", cellwidth = 15, cellheight = 12, scale = "none", annotation_row = annotation_row, annotation_colors = ann_colors, cluster_rows = F, cluster_cols=F, fontsize_row=10, filename="heatmap_BP_test.pdf")

How can I use a compressed connection between Django and MySQL?

I have compression on my MySQL server, and I'd like to ensure Django is making compressed connections. How can I do this?
Trial, error and inference suggest the solution is to use a compress field set to True in the OPTIONS dict:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',  # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.
        ...
        'OPTIONS': {
            'compress': True
        }
    }
}
I can't confirm the connection is actually compressed though.
A cursory (no pun intended) examination of /django/db/backends/mysql/base.py in Django 1.3:
298     def _cursor(self):
299         if not self._valid_connection():
300             kwargs = {
301                 'conv': django_conversions,
302                 'charset': 'utf8',
303                 'use_unicode': True,
304             }
305             settings_dict = self.settings_dict
306             if settings_dict['USER']:
307                 kwargs['user'] = settings_dict['USER']
308             if settings_dict['NAME']:
309                 kwargs['db'] = settings_dict['NAME']
310             if settings_dict['PASSWORD']:
311                 kwargs['passwd'] = settings_dict['PASSWORD']
312             if settings_dict['HOST'].startswith('/'):
313                 kwargs['unix_socket'] = settings_dict['HOST']
314             elif settings_dict['HOST']:
315                 kwargs['host'] = settings_dict['HOST']
316             if settings_dict['PORT']:
317                 kwargs['port'] = int(settings_dict['PORT'])
318             # We need the number of potentially affected rows after an
319             # "UPDATE", not the number of changed rows.
320             kwargs['client_flag'] = CLIENT.FOUND_ROWS
321             kwargs.update(settings_dict['OPTIONS'])
322             self.connection = Database.connect(**kwargs)
323             self.connection.encoders[SafeUnicode] = self.connection.encoders[unicode]
324             self.connection.encoders[SafeString] = self.connection.encoders[str]
325             connection_created.send(sender=self.__class__, connection=self)
326         cursor = CursorWrapper(self.connection.cursor())
327         return cursor
When creating a connection on line 322, the code does not seem to pass the compress argument in kwargs, not by default anyway.
Passing 'compress': True through OPTIONS should let you create a compressed connection when it's available, since that dictionary is merged into kwargs on line 321.
There do not seem to be any other calls to the MySQLdb.connect() method in the rest of the backend. Note that MySQLdb is imported as: import MySQLdb as Database in that file.
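One way to check at runtime whether compression was actually negotiated (my own suggestion, assuming the MySQL server exposes the Compression session status variable) is to query it over the Django connection, e.g. from a Django shell:
from django.db import connection

cursor = connection.cursor()
cursor.execute("SHOW SESSION STATUS LIKE 'Compression'")
print(cursor.fetchone())  # expect ('Compression', 'ON') when the protocol is compressed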

How do multi-texture OBJ->JSON converted files keep track of face-texture mapping?

I'm trying to manually (no libs such as Three.js) load a JSON 3D model into my WebGL code just for fun, but I'm having a hard time when my models have more than one texture.
In an OBJ->JSON converted file, how do I know which texture is the "active" one for the faces that follow? OBJ files use the 'usemtl' tag to identify the texture/material in use, but I can't seem to find that kind of pointer when working with JSON.
By the way, I'm using the OBJ->JSON converter written by alteredq.
Thanks a bunch,
Rod
Take a look at this file: three.js / src / extras / loaders / JSONLoader.js.
The first element of each face in the faces array of the JSON file is a bit field. The first bit says whether that face has three or four indices, and the second bit says whether that face has a material assigned. The material index, if any, appears after the indices.
Example: faces: [2, 46, 44, 42, 0, 1, 45, 46, 48, 3, ...
First face (triangle with material):
Type: 2 (00000010b)
Indices: 46, 44, 42
Material index: 0
Second face (quad without material):
Type: 1 (00000001b)
Indices: 45, 46, 48
Third face (quad with material):
Type: 3 (00000011b)
Indices: ...
Check source code for full meaning of that bit field.
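To make the layout concrete, here is a toy decoder in Python for just the two bits discussed above (my own sketch, not the loader's code; the real format has more flags, so check JSONLoader.js before relying on it):
def read_face(faces, i):
    # handles only the quad bit and the face-material bit
    bits = faces[i]; i += 1
    is_quad = bool(bits & 1)        # bit 0: 0 -> triangle (3 indices), 1 -> quad (4 indices)
    has_material = bool(bits & 2)   # bit 1: a material index follows the vertex indices
    n = 4 if is_quad else 3
    indices = faces[i:i + n]; i += n
    material = None
    if has_material:
        material = faces[i]; i += 1
    return (indices, material), i

faces = [2, 46, 44, 42, 0]  # just the first face from the example above
print(read_face(faces, 0))  # (([46, 44, 42], 0), 5) -> triangle with material index 0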
In the OBJ->JSON converter I have written for the KickJS game engine, each material has its own range of indices.
This means a simple OBJ model such as
mtllib plane.mtl
o Plane
v 1.000000 0.000000 -1.000000
v 1.000000 0.000000 1.000000
v -1.000000 0.000000 1.000000
v -1.000000 0.000000 -1.000000
usemtl Material
s 1
f 2 3 4
usemtl Material.001
f 1 2 4
would be translated into this (with two index arrays, one for each material):
[
{
"vertex": [1,0,1,-1,0,1,-1,0,-1,1,0,-1],
"name": "Plane mesh",
"normal": [0,-1,0,0,-1,0,0,-1,0,0,0,0],
"indices0": [0,1,2],
"indices1": [3,0,2]
}
]
Use the online model viewer for the conversion:
http://www.kickjs.org/example/model_viewer/model_viewer.html