Including wavefront obj models into each other

Is there a way to include several wavefront obj files into one?
A friend of mine told me there is definitely some include keyword in OBJ, which would allow me to create obj files like:
#room.obj:
include chair.obj
include table.obj
...
v 26.7903 8.1230 26.4282
v 26.3940 8.8766 26.1557
....
but I can't find such a command in the documentation.
I want to be able to create such files to combine different OBJ files. I do not want to merge two OBJ files via some 3D editor (which bakes the geometry together into one).
Is there such a command?

Unfortunately there is no such command. VRML97 supports this via the Inline node, but OBJ does not.
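If you need it badly enough, though, a small preprocessor is easy to write. Here is a minimal sketch in Python, assuming a hypothetical include <file> directive and assuming the includes appear before any geometry in the including file (as in your example). OBJ indices are global and 1-based, so face indices from each file have to be shifted by the number of elements contributed by previously included files; the sketch handles only v/vt/vn and positive f indices, so it is not a full OBJ parser.
# Minimal sketch of an "include" preprocessor for OBJ (the directive itself
# is hypothetical; OBJ has no such keyword). Assumes includes come before any
# geometry in the including file, and positive f indices only.
def expand_obj(path, out, totals=None):
    totals = totals if totals is not None else {'v': 0, 'vt': 0, 'vn': 0}
    local = {'v': 0, 'vt': 0, 'vn': 0}   # elements declared by this file itself
    order = ('v', 'vt', 'vn')            # index slots in an "f v/vt/vn" reference
    with open(path) as src:
        for line in src:
            parts = line.split()
            if not parts:
                out.write(line)
            elif parts[0] == 'include':  # hypothetical directive
                expand_obj(parts[1], out, totals)
            elif parts[0] in local:
                local[parts[0]] += 1
                totals[parts[0]] += 1
                out.write(line)
            elif parts[0] == 'f':
                # shift the 1-based indices by the number of elements that
                # other (included) files have already contributed
                refs = []
                for ref in parts[1:]:
                    fields = ref.split('/')
                    refs.append('/'.join(
                        str(int(n) + totals[order[i]] - local[order[i]]) if n else ''
                        for i, n in enumerate(fields)))
                out.write('f ' + ' '.join(refs) + '\n')
            else:
                out.write(line)

with open('room_expanded.obj', 'w') as out:
    expand_obj('room.obj', out)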

Azure Datafactory process and filter files to process

I have a pipeline that processes some files, and in some cases "groups" of files, meaning files that should be processed together and are correlated by a timestamp.
Ex.
Timestamp#Customer.csv
Timestamp#Customer_Offices.csv
Timestamp_1#Customer.csv
Timestamp_1#Customer_Offices.csv
...
I have a table with all the scopes and files with their respective file masks, and I populate a variable at the beginning of the pipeline based on a parameter.
The Get files activity goes to an sFTP location and grabs files from a folder. I then want to process only the "Customer.csv" and "Customer_Offices.csv" files, because the folder contains other file types and scopes that are processed by other pipelines. If I don't filter, the next activities end up processing metadata of files they are not supposed to. In terms of efficiency and performance this is bad, and it is even causing issues with files being left behind.
I've tried something like
@variables('FilesToSearch').contains(@endswith(item().name, 'do I need this 2nd parm in arrays ?'))
but no luck... :(
Any help will be highly appreciated,
Best regards,
Manuel
The contains function can check a string for a substring, so you can try an expression like @contains(item().name,'Customer')
and there is no need to create a variable.
Or use the endsWith function with this expression:
@or(endswith(item().name,'Customer.csv'),endswith(item().name,'Customer_Offices.csv'))
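If you want to sanity-check that logic outside Data Factory first, the same predicate is easy to prototype in Python (the file names below are made-up examples following the question's pattern):
# Prototype of the same endswith-based filter; the file names are made up.
files = [
    '20240101#Customer.csv',
    '20240101#Customer_Offices.csv',
    '20240101#Suppliers.csv',   # belongs to another pipeline, should be skipped
]
wanted = [n for n in files
          if n.endswith('Customer.csv') or n.endswith('Customer_Offices.csv')]
print(wanted)   # ['20240101#Customer.csv', '20240101#Customer_Offices.csv']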

PsychoPy: how to avoid storing variables in the CSV file?

When I run my PsychoPy experiment, PsychoPy saves a CSV file that contains my trials and the values of my variables.
Among these there are some variables I would like NOT to be included. There are some variables I decided to include in the CSV, but many others end up in it automatically.
Is there a way to manually force (from the code block) the exclusion of some variables from the CSV?
Is there a way to decide the order of the saved columns/variables in the CSV?
It is not really important, and I know I could just create an output file myself without using the one from PsychoPy, or easily clean it afterwards, but I was just curious.
PsychoPy spits out all the variables it thinks you could need. If you want to drop some of them, that is a task for the analysis stage, and is easily done in any processing pipeline. Unless you are analysing data in a spreadsheet (which you really shouldn't), the number of columns in the output file shouldn't really be an issue. The philosophy is that you shouldn't back yourself into a corner by discarding data at the recording stage - what about the reviewer who asks about the influence of a variable that you didn't think was important?
If you are using the Builder interface, the saving of onset & offset times for each component is optional, and is controlled in the "data" tab of each component dialog.
The order of variables is also not under direct control of the user, but again, can be easily manipulated at the analysis stage.
As you note, you can of course write code to save custom output files of your own design.
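For instance, a minimal sketch of that last option using only the standard library; the column names and values are placeholders for your own variables:
# Write a custom CSV with only the columns you want, in the order you want.
# Column names and values are placeholders for your own variables.
import csv

rows = [{'participant': 'p01', 'condition': 'A', 'rt': 0.512}]   # one dict per trial

with open('custom_output.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=['participant', 'condition', 'rt'])
    writer.writeheader()
    writer.writerows(rows)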
There is a special block called session_variable_order: [var1, var2, var3] in the experiment_config.yaml file, which you probably should be using; you should also consider these methods (both are instance methods, so exp and trials below stand for your own ExperimentHandler and TrialHandler):
exp.saveAsWideText(fileName='exp_handler.csv', delim='\t', sortColumns=False, encoding='utf-8')
trials.saveAsText(fileName='trial_handler.txt', delim=',', encoding='utf-8', dataOut=('n', 'all_mean', 'all_raw'), summarised=False)
Notice the sortColumns and dataOut params.

Copying fits-file data and/or header into a new fits-file

A similar question was asked before, but it was phrased ambiguously and used different code.
My problem: I want to make an exact copy of a .fits file header into a new file. (I need to process a FITS file in a way that changes the data, keeps the header the same, and saves the result in a new file.) Here is a short example demonstrating the tools I use and the discrepancy I arrive at:
data_old, header_old = fits.getdata("input_file.fits", header=True)
fits.writeto('output_file.fits', data_old, header_old, overwrite=True)
I would now expect the files to be exact copies of each other (both headers and data being the same). But if I check for differences, e.g. this way:
fits.printdiff("input_file.fits", "output_file.fits")
I see that the two files are not exact copies of each other. The report says:
...
Files contain different numbers of HDUs:
a: 3
b: 2
Primary HDU:
Headers contain differences:
Headers have different number of cards:
a: 54
b: 4
...
Extension HDU 1:
Headers contain differences:
Keyword GCOUNT has different comments:
...
Why is there no exact copy? How can I make an exact copy of a header (and/or the data)? Is a key being forgotten? Is there an alternative, simple way of copy-pasting a FITS file header?
If you just want to update the data array in an existing file while preserving the rest of the structure, have you tried the update function?
The only issue is that it doesn't appear to have an option to write to a new file rather than updating the existing one (maybe it should have this option). However, you can still use it by first copying the existing file and then updating the copy.
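A minimal sketch of that copy-then-update approach, assuming the data you want to replace lives in image extension 1 (doubling it stands in for your real processing):
# Copy first, then rewrite the data of one HDU in the copy.
import shutil
from astropy.io import fits

shutil.copyfile('input_file.fits', 'output_file.fits')    # keep the original intact
new_data = fits.getdata('output_file.fits', ext=1) * 2.0  # placeholder processing step
fits.update('output_file.fits', new_data, ext=1)          # update just that HDU's data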
Alternatively, you can do things more directly using the object-oriented API. Something like:
with fits.open(filename) as hdu_list:
    hdu = hdu_list[<name or index of the HDU to update>]
    hdu.data = <new ndarray>
    # or hdu.data[<some index>] = <some value>, i.e. just directly modify the existing array
    hdu.writeto('updated.fits')        # write just that HDU to a new file, or
    # hdu_list.writeto('updated.fits') # write all HDUs, including the updated one, to a new file
There's nothing not "pythonic" about this :)
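As for why your two files differ: getdata/writeto only round-trip a single HDU, so the remaining HDUs (and their header cards) are lost along the way. If what you want is an exact copy of the whole file, open it and write the entire HDUList back out:
# Exact copy of every HDU (headers and data) in one go.
from astropy.io import fits

with fits.open('input_file.fits') as hdul:
    hdul.writeto('output_file.fits', overwrite=True)

fits.printdiff('input_file.fits', 'output_file.fits')   # should now report no differences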

How can I access the information associated with an object from a Mercurial plugin?

I am trying to write a small Mercurial extension which, given the path to an object stored within the repository, tells you the revision it's at. So far I'm working from the code in the WritingExtensions article, and I have something like this:
cmdtable = {
    # cmd name    function call
    "whichrev": (whichrev, [], "hg whichrev FILE")
}
and the whichrev function has almost no code:
def whichrev(ui, repo, node, **opts):
    # node will be the file chosen at the command line
    pass
So, for example:
hg whichrev text_file.txt
will call the whichrev function with node set to text_file.txt. Using the debugger, I found that I can access a filelog object like this:
repo.file("text_file.txt")
But I don't know what I should access to get the sha1 of the file. I have a feeling I may not be working with the right function.
Given a path to a tracked file (the file may or may not appear as modified under hg status), how can I get its sha1 from my extension?
A filelog object is pretty low level, you probably want a filectx:
A filecontext object makes access to data related to a particular filerevision convenient.
You can get one through a changectx:
ctx = repo['.']
fooctx = ctx['foo']
print fooctx.filenode()
Or directly through the repo:
fooctx = repo.filectx('foo', '.')
Pass None instead of '.' to get the working copy versions.
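Putting the pieces together, whichrev could look something like this (a minimal sketch in the Python 2 style Mercurial extensions used at the time; the argument name path is my own):
from mercurial.node import hex

def whichrev(ui, repo, path, **opts):
    # changectx of the working directory's parent; use repo[None] for the working copy
    ctx = repo['.']
    fctx = ctx[path]                          # filectx for the tracked file
    ui.write("%s\n" % hex(fctx.filenode()))   # sha1 of that file revision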

Construct an Iterator

Let's say you want to construct an Iterator that spits out File objects. What type of data do you usually provide to the constructor of such an Iterator?
an array of pre-constructed File objects, or
simply raw data (a multidimensional array, for instance), and let the Iterator create File objects on the fly when iterated through?
Edit:
Although my question was actually meant to be as general as possible, it seems my example is a bit too broad to tackle generally, so I'll elaborate a bit more. The File objects I'm talking about are actually file references from a database. See these two tables:
folder
| id | folderId | name |
------------------------------------
| 1 | null | downloads |
file
| id | folderId | name |
------------------------------------
| 1 | 1 | instructions.pdf |
They reference actual folders and files on a filesystem.
Now, I created a FileManager object. This will be able to return a listing of folders and files. For instance:
FileManager::listFiles( Folder $folder );
... would return an Iterator of File objects (or, come to think of it, rather FileReference objects) from the database.
So what my question boils down to is:
If the FileManager object constructs the Iterator in listFiles(), would you do something like this (pseudo code):
listFiles( Folder $folder )
{
    // let's assume the following returns a multidimensional array of rows
    $filesData = $db->fetch( $sqlForFetchingFilesFromFolder );
    // let the Iterator take care of constructing the FileReference objects with each iteration
    return new FileIterator( $filesData );
}
or (pseudo code):
listFiles( Folder $folder )
{
    // let's assume the following returns a multidimensional array of rows
    $filesData = $db->fetch( $sqlForFetchingFilesFromFolder );
    $files = array();
    foreach( $filesData as $fileData )
    {
        $files[] = new FileReference( $fileData );
    }
    // provide the Iterator with precomposed FileReference objects
    return new FileIterator( $files );
}
Hope this clarifies things a bit.
What is your "File" object meant to be? An open handle to a file, or a representation of a file system path which can be opened in turn?
It would generally be a bad idea to open all the files at once - after all, part of the point of using an iterator is that you only access one object at a time. Your iterator could yield one open file at a time, and let the caller take responsibility for closing it, although again that might be slightly odd to use.
Your requirements aren't clear, to be honest - in my experience, most iterators which yield a series of files use something like Directory.GetFiles(pattern) - you don't pass them the raw data at all, you pass them something which they can use to find the data for you.
It's not obvious what you're trying to get at - it feels like you're trying to ask a general question, but you haven't provided enough information to let us advise you. It's like asking, "Do I want to use a string or an integer?" without giving any context.
EDIT: I would probably push all of that logic into FileIterator, personally. Otherwise it's hard to see what value it's really providing. In a language like C# or Python you wouldn't need a separate class in the first place - you'd just use a generator of some description. In that sense this question isn't language agnostic :(
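To illustrate that last point, here is roughly what the lazy variant looks like as a Python generator; fetch_file_rows and FileReference are stand-ins for the question's own database layer and class:
# Sketch of the lazy variant as a generator; fetch_file_rows() and
# FileReference are stand-ins for the question's database layer and class.
def fetch_file_rows(folder_id):
    return [{'id': 1, 'folderId': folder_id, 'name': 'instructions.pdf'}]

class FileReference:
    def __init__(self, row):
        self.name = row['name']

def list_files(folder_id):
    for row in fetch_file_rows(folder_id):   # one row at a time
        yield FileReference(row)             # constructed only when consumed

for ref in list_files(1):
    print(ref.name)   # -> instructions.pdf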
What exactly is your iterator supposed to do? Write data to files? Create them?
An iterator is a pattern for iterating through data, which means providing sequential access to it in a uniform way, not mutating it.
I find the question to be unclear.
Are we talking Iterator or Factory?
To me, an Iterator operates on a pre-existing collection of things and allows the caller to work on each thing in turn.
When you say "spits out", do you mean it allows the client to work with one file from a pre-existing set of files, or do you mean that you are iterating over some data and intend to store that data in files you are generating? If we are generating, then we've got a File factory.
My guess is that you are intending to process some files in a file system. I think your Iterator is akin to a Directory: it can give you the next file it knows about. So I construct the "Directory" by passing enough data to let it know which files you mean (could be just an OS path, could be some kind of "find" expression, a list of ftp-like references, etc.) and expect it to give me the next File as I iterate.
----updated following question clarification
I think the key question here is when the individual files should be opened. The Iterator itself will reasonably return a File object corresponding to an open file handle, and the caller can then just work with the file. But internally, should the iterator be working against a list of pre-opened files, or a list of file references, with each file being opened as the iterator's next() is used?
I think we should do the latter, because there is overhead in having an open file, hence we should open files only when we need them.
That leads to one other point: who closes the file? We can't afford to keep them all open. Perhaps the iterator should close each file as next() is called. This implies that the iterator itself needs a close() method to allow tidying up of the currently open file. Alternatively, we need to explicitly document that closing is the client's responsibility.
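A sketch of that open-lazily, close-on-advance idea as a Python generator (the paths are just examples):
# Each file is opened only when the consumer reaches it, and closed again as
# soon as the consumer advances (or abandons the generator).
def iter_open_files(paths):
    for path in paths:
        with open(path) as f:
            yield f   # open while the caller uses it; closed on the next next()

for f in iter_open_files(['a.txt', 'b.txt']):
    print(f.readline())   # placeholder for the caller's per-file work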