There is a plethora of BUILD files scattered throughout the hierarchy of my mono repo.
Some of these files contain cc_binary rules.
I know they are all built into bazel-bin, but I'd like to get easy access to them all.
How can I package them all up, and put them all into ~/.bin/ for example?
I see the packaging rules, but it's not clear to me how to write a rule that captures every single program and packages them together.
It may not be the most elegant solution (and I hope I understood the question), but this is how we do it: by packaging/"tarring" each binary in its own Bazel package / BUILD file:
load("@bazel_tools//tools/build_defs/pkg:pkg.bzl", "pkg_tar")

cc_binary(
    name = "hello",
    ...
)

pkg_tar(
    name = "hello_pkg",
    srcs = [":hello"],
    mode = "0755",
    package_dir = "/usr/bin",
)
And then we'd collect all of those into one overall tarball/package in the project root:
pkg_tar(
    name = "mypkg",
    extension = "tar.gz",
    deps = [
        "//hello:hello_pkg",
        ...
    ],
)
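Building //:mypkg then leaves the aggregate tarball under bazel-bin, which you can unpack into ~/.bin. A sketch (the --strip-components value is an assumption; it depends on how package_dir lays out paths inside the tarball):

bazel build //:mypkg
tar -xzf bazel-bin/mypkg.tar.gz -C ~/.bin --strip-components=2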
Sometimes we'd actually have multiple such rules for hello, for instance to collect executables under bin and libraries under lib with intermediary hello_bin and hello_lib targets. Those would, in the same fashion as mypkg above, first be aggregated into hello_pkg, and that in turn would be used in mypkg, as sketched below.
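For illustration, a minimal sketch of that layout in hello's BUILD file, assuming a hypothetical :libhello library target alongside :hello:

pkg_tar(
    name = "hello_bin",
    srcs = [":hello"],
    mode = "0755",
    package_dir = "/usr/bin",
)

pkg_tar(
    name = "hello_lib",
    srcs = [":libhello"],  # hypothetical library target
    mode = "0644",
    package_dir = "/usr/lib",
)

# aggregate per-package tarball, consumed by //:mypkg at the root
pkg_tar(
    name = "hello_pkg",
    deps = [
        ":hello_bin",
        ":hello_lib",
    ],
)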
When I run my PsychoPy experiment, PsychoPy saves a CSV file that contains my trials and the values of my variables.
Among these, there are some variables I would like NOT to be included. There are some variables which I decided to include in the CSV, but many others ended up in it automatically.
Is there a way to manually force (from the code block) the exclusion of some variables from the CSV?
Is there a way to decide the order of the saved columns/variables in the CSV?
It is not really important, and I know I could just create an output file myself without using the one PsychoPy produces, or easily clean it afterwards, but I was just curious.
PsychoPy spits out all the variables it thinks you could need. If you want to drop some of them, that is a task for the analysis stage, and is easily done in any processing pipeline. Unless you are analysing data in a spreadsheet (which you really shouldn't), the number of columns in the output file shouldn't really be an issue. The philosophy is that you shouldn't back yourself into a corner by discarding data at the recording stage - what about the reviewer who asks about the influence of a variable that you didn't think was important?
If you are using the Builder interface, the saving of onset & offset times for each component is optional, and is controlled in the "data" tab of each component dialog.
The order of variables is also not under direct control of the user, but again, can be easily manipulated at the analysis stage.
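For example, a minimal pandas sketch of that analysis-stage cleanup (the file and column names here are hypothetical):

import pandas as pd

df = pd.read_csv('my_experiment.csv')            # PsychoPy's trial-by-trial output
df = df.drop(columns=['frameRate', 'expName'])   # drop variables you don't need
df = df[['participant', 'trial', 'rt']]          # keep and reorder the ones you want
df.to_csv('cleaned.csv', index=False)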
As you note, you can of course write code to save custom output files of your own design.
There is a special block called session_variable_order: [var1, var2, var3] in the experiment_config.yaml file, which you probably should be using; also, consider these methods:
from psychopy import data
data.ExperimentHandler.saveAsWideText(fileName='exp_handler.csv', delim='\t', sortColumns=False, encoding='utf-8')
data.TrialHandler.saveAsText(fileName='trial_handler.txt', delim=',', encoding='utf-8', dataOut=('n', 'all_mean', 'all_raw'), summarised=False)
Notice the sortColumns and dataOut params.
I have been looking for information on Google and Stack Overflow, but I didn't find a good solution.
I need to handle a list (add elements, delete elements...) that is saved in a file, so the list isn't lost when execution finishes, because I need to run my Python script periodically. Here are the alternatives I found, but they all have problems:
Shelve module: can't find how to delete an element in the list (such as list.pop()) instead of deleting the whole list.
pprint.pformat(): to modify information, I need to delete the whole document and save the modified information again, which is very inefficient.
json: tedious for just a list and doesn't seem to solve my problem.
So, what is the best way to handle a list, doing things as easy as mylist.pop(), while keeping the changes in a file in an efficient way?
Since this has never been answered before, here is an efficient way: the package pysos can handle disk-backed lists with inserts/deletes in constant time.
pip install pysos
Code example:
import pysos
db = pysos.List('somefile')
db.append('saved in the file 0')
db.append('saved in the file 1')
db.append('saved in the file 2')
print(db[1]) # => 'saved in the file 1'
del db[1]
print(db[1]) # => 'saved in the file 2'
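Because the list is disk-backed, the contents survive across executions; reopening the same file in a later run picks up where the example above left off:

import pysos

db = pysos.List('somefile')  # reopened in a later execution
print(db[0])  # => 'saved in the file 0'
print(db[1])  # => 'saved in the file 2'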
I have a use case that makes use of the <w:altChunk/> element in a Word document by injecting (fragments of) HTML files as alternate chunks and letting Word do its work when the file gets opened. The current implementation used XML/XSL to compose the WordML XML, modify relationships, and do all the packaging manually, which is a real pain.
I wanted to move to python-docx, but the API doesn't support this directly. I have found a way to add the <w:altChunk/> in the document XML, but I still struggle to find a way to add the relationship and the related file to the package.
I think I should make a compatible part and pass it to the document.part.relate_to function to do its job, but I still can't figure out how:
from docx import Document
from docx.oxml import OxmlElement
from docx.oxml.ns import qn
from docx.opc.constants import RELATIONSHIP_TYPE as RT

def add_alt_chunk(doc: Document, chunk_part):
    '''TODO: figure out how to add files and relationships'''
    r_id = doc.part.relate_to(chunk_part, RT.A_F_CHUNK)
    alt = OxmlElement('w:altChunk')
    alt.set(qn('r:id'), r_id)
    doc.element.body.sectPr.addprevious(alt)
Update:
As per scanny's advice, below is my working code. Thank you very much Steve!
from docx import Document
from docx.oxml import OxmlElement
from docx.oxml.ns import qn
from docx.opc.part import Part
from docx.opc.constants import RELATIONSHIP_TYPE as RT

def add_alt_chunk(doc: Document, html: str):
    package = doc.part.package
    partname = package.next_partname('/word/altChunk%d.html')
    alt_part = Part(partname, 'text/html', html.encode(), package)
    r_id = doc.part.relate_to(alt_part, RT.A_F_CHUNK)
    alt_chunk = OxmlElement('w:altChunk')
    alt_chunk.set(qn('r:id'), r_id)
    doc.element.body.sectPr.addprevious(alt_chunk)

doc = Document()
doc.add_paragraph('Hello')
add_alt_chunk(doc, "<body><strong>I'm an altChunk</strong></body>")
doc.add_paragraph('Have a nice day!')
doc.save('test.docx')
Note: the altChunk parts only work/appear when the document is opened using MS Word.
Well, some hints here anyway. Maybe you can post your working code at the end as a full "answer":
The alt-chunk part needs to start its life as a docx.opc.part.Part object.
The blob argument should be the bytes of the file, which is often but not always plain text. It must be bytes though, not unicode (characters), so any encoding has to happen before calling Part().
I expect you can work out the other arguments:
package is the overall OPC package, available on document.part.package.
You can use docx.opc.package.OpcPackage.next_partname() to get an available partname based on a root template like: "altChunk%s" for a name like "altChunk3". Check what partname prefix Word uses for these, possibly with unzip -l has-an-alt-chunk.docx; should be easy to spot.
The content-type is one in docx.opc.constants.CONTENT_TYPE. Check the [Content_Types].xml part in a .docx file that has an altChunk to see what they use.
Once formed, the document_part.relate_to() method will create the proper relationship. If there is more than one relationship (not common) then you need to create each one separately. There would only be one relationship from a particular part, just some parts are related to more than one other part. Check the relationships in an existing .docx to see, but pretty good guess it's only the one in this case.
So your code would look something like:
package = document.part.package
partname = package.next_partname("altChunkySomethingPrefix")
content_type = docx.opc.constants.CONTENT_TYPE.THE_RIGHT_MIME_TYPE
blob = make_the_altChunk_file_bytes()
alt_chunk_part = Part(partname, content_type, blob, package)
rId = document.part.relate_to(alt_chunk_part, RT.A_F_CHUNK)
etc.
I've got an executable target called Foobar, a static library holding some common code called FoobarCommon, and a test target specifically for the common code called FoobarCommonSpecs.
Unsurprisingly, I have made both Foobar and FoobarCommonSpecs depend on the FoobarCommon library.
The Podfile looks something like the below:
target 'FoobarCommon' do
  pod 'ReactiveCocoa'
  ...
end

target 'Foobar' do # links against FoobarCommon in Xcode
  ...
end

target 'FoobarCommonSpecs' do # links against FoobarCommon in Xcode
  pod 'LLReactiveMatchers', :git => 'https://github.com/lawrencelomax/LLReactiveMatchers.git'
end
LLReactiveMatchers is a Pod that depends on ReactiveCocoa.
Note that in this situation, ReactiveCocoa is present in both FoobarCommon and FoobarCommonSpecs.
The Problem
Whenever I run FoobarCommonSpecs, I get many duplicate symbol errors for ReactiveCocoa.
I want to tell CocoaPods that it should just IGNORE LLReactiveMatchers' dependency on ReactiveCocoa. It should let Xcode do its job and link against the copy of ReactiveCocoa found in FoobarCommon. How do I do that?
Does the link_with directive have anything to do with this?
I am trying to write a small Mercurial extension which, given the path to an object stored within the repository, will tell you the revision it's at. So far, I'm working from the code in the WritingExtensions article, and I have something like this:
cmdtable = {
    # cmd name    function call
    "whichrev": (whichrev, [], "hg whichrev FILE")
}
and the whichrev function has almost no code:
def whichrev(ui, repo, node, **opts):
    # node will be the file chosen at the command line
    pass
So, for example:
hg whichrev text_file.txt
will call the whichrev function with node set to text_file.txt. Using the debugger, I found that I can access a filelog object like this:
repo.file("text_file.txt")
But I don't know what I should access in order to get to the sha1 of the file. I have a feeling I may not be working with the right function.
Given a path to a tracked file (the file may or may not appear as modified under hg status), how can I get its sha1 from my extension?
A filelog object is pretty low level; you probably want a filectx:
A filecontext object makes access to data related to a particular filerevision convenient.
You can get one through a changectx:
ctx = repo['.']
fooctx = ctx['foo']
print fooctx.filenode()
Or directly through the repo:
fooctx = repo.filectx('foo', '.')
Pass None instead of '.' to get the working-copy versions.
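Putting it together, a minimal sketch of the whichrev command from the question, assuming you want the sha1 of the file as of the working copy's parent revision (and following the Python 2-era Mercurial API used above):

from mercurial import node

def whichrev(ui, repo, path, **opts):
    ctx = repo['.']   # changeset the working copy is based on
    fctx = ctx[path]  # file context for the tracked file
    ui.write(node.hex(fctx.filenode()) + '\n')  # hex sha1 of this file revision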