Configure multiple switch values in Nuke - configuration

My comp has multiple switch nodes, which work as a configurator. Is there a way I can set these configurations all at once, rather than going through each switch node and selecting the values?
Example
Set the following switch values and store them as 'Configuration A':
switch a = 0
switch b = 1
switch c = 3
switch d = 2
switch e = 7
Then another set of values for Configurations B, C, D, etc.?
I can then select the configurations more efficiently.
In my current setup, I've just created a NoOp and added the following expression to the switches:
[value int (TRIM_SELECT.leathers)]
which is basically the switch node with a label, but I still have to configure these for each option.
Many thanks.
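One way to do this is a minimal sketch with Nuke's Python API (the node names and the Configuration B values below are hypothetical; a Switch node stores its selection in its which knob): keep each configuration in a dict and apply it in one call.

import nuke

# Hypothetical node names; each configuration maps node name -> 'which' value
CONFIGURATIONS = {
    'A': {'switch_a': 0, 'switch_b': 1, 'switch_c': 3, 'switch_d': 2, 'switch_e': 7},
    'B': {'switch_a': 1, 'switch_b': 0, 'switch_c': 2, 'switch_d': 2, 'switch_e': 5},
}

def apply_configuration(name):
    # Set every switch in the chosen configuration in one go
    for node_name, which in CONFIGURATIONS[name].items():
        nuke.toNode(node_name)['which'].setValue(which)

apply_configuration('A')

A pulldown knob on your NoOp plus a knobChanged callback, or one PyScript button per configuration, would then let you pick a configuration with a single click.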

Related

Merge two collections excluding default values in immutable js

I want to merge two records created with the same constructor.
Record A gets initialized with values for the fields a, b, c, while record B gets initialized with a value only for foo.
The constructor has default values for all fields, so both records have a,b,c,foo as fields.
Now I want to merge record B "on top of" A, such that the new record will contain a, b, c from A and foo from B.
What actually happens, is that B completely overrides the values in A (admittedly, this sounds logical).
Is there a known / easy way to merge the records, excluding default values? I am thinking of writing a function that recognizes the constructor, finds the default values from a config file, and has some logic to exclude default values, but that sounds error-prone (how do I differentiate between a default value and a legitimate value that just happens to equal the default?).
Also, I am working in an existing codebase and would like to make changes as small as possible.
I think you want mergeWith (docs).
It might even make sense to hang a method off of either or both of type A and type B to expose your custom merge logic. This would allow you to more easily identify default values (since presumably they'll be in scope) as well as provide convenient access.
Usage would look something like:
a instanceof A; //=> true
b instanceof B; //=> true
a.mergeB(b); //=> a w/ some or all of b's data
b.mergeA(a); //=> b w/ some or all of a's data
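For example, a minimal sketch of the mergeWith idea (assuming B is the Record factory that produced b, so a fresh B() carries only default values, and an Immutable.js version where Record supports mergeWith):
const defaults = B(); // a fresh record holds nothing but the defaults
const merged = a.mergeWith(
  // keep a's value whenever b's value is just the default
  (oldVal, newVal, key) => (newVal === defaults.get(key) ? oldVal : newVal),
  b
);
As the question's caveat notes, this still cannot distinguish an untouched default from a legitimate value that happens to equal it.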

Using Google BigQuery to run multiple queries back to back

I'm currently working on a project where I'm using Google BigQuery to pull data from spreadsheets. I'm VERY new to SQL, so I apologize. I'm currently using the following code:
Select *
From my_data
Where T1 > 1000
And T2 > 2000
So, keeping the SELECT and FROM the same, I want to be able to run multiple queries where I just keep changing the values I'm comparing T1 and T2 against. Around 50 different pairs of values. I'd like BigQuery to run through these 50 sets back to back. Is there a way to do this? Thanks!
I'm VERY new to SQL
... and, I assume, new to BigQuery as well ..., so
Below is one option for new users who are not yet familiar with the BigQuery API and/or clients other than the BigQuery Web UI.
BigQuery Mate adds a parameters feature to the BigQuery Web UI.
What you need to do is:
Save your query using the Save Query button, as below.
Notice <var_t1> and <var_t2>.
Those are the parameters identifiable by BigQuery Mate.
Now you can set those parameters:
Click QB Mate and then Parameters to get to the parameters form.
Set the parameters to whatever values you want to run with.
Click the Replace Parameters OK button, and those values will appear in the editor.
Now you can run your query.
To run another round with new parameters, load your saved query into the editor again by clicking the Edit Query button, then repeat setting the parameters, and so on.
You can find the BigQuery Mate Chrome extension here.
Disclaimer: I am the author and the only developer of this tool.
You may be interested in running parameterized queries. The idea would be to have a single query string, e.g.:
SELECT *
FROM YourTable
WHERE t1 > @t1_min AND
      t2 > @t2_min;
You would execute this multiple times, where each time you bind different values of the t1_min and t2_min parameters. The exact logic would depend on the API through which you are using the client libraries, and there are language-specific examples in the first link that I provided.
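For instance, here is a minimal sketch using the google-cloud-bigquery Python client (the project/dataset/table names and the value pairs are placeholders; in standard SQL, named parameters are referenced as @name):

from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT *
    FROM `your_project.your_dataset.YourTable`
    WHERE t1 > @t1_min AND t2 > @t2_min
"""

# the ~50 (t1_min, t2_min) value pairs you want to sweep over
value_pairs = [(1000, 2000), (1500, 2500)]

for t1_min, t2_min in value_pairs:
    # bind fresh parameter values for each run of the same query string
    job_config = bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("t1_min", "INT64", t1_min),
            bigquery.ScalarQueryParameter("t2_min", "INT64", t2_min),
        ]
    )
    rows = client.query(query, job_config=job_config).result()
    print(t1_min, t2_min, rows.total_rows)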
If you are not concerned about SQL injection and just want to iteratively swap out parameters in queries, you might want to look into the mustache templating language (available in R as 'whisker').
If you are using R, you can iterate/automate this type of query with the condusco R package. Here's a complete R script that will accomplish this kind of iterative query using both whisker and condusco:
library(bigrquery)
library(condusco)
library(whisker)
# create a simple function that will create a query
# using {{{mustache}}} placeholders for any parameters
create_results_table <- function(params) {

  destination_table <- '{{{dataset_id}}}.{{{table_prefix}}}_results_{{{year_low}}}_{{{year_high}}}'

  query <- '
    SELECT *
    FROM `bigquery-public-data.samples.gsod`
    WHERE year > {{{year_low}}}
      AND year <= {{{year_high}}}
  '

  # use whisker to swap out {{{mustache}}} placeholders with parameters
  query_exec(
    whisker.render(query, params),
    project = whisker.render('{{{project}}}', params),
    destination_table = whisker.render(destination_table, params),
    use_legacy_sql = FALSE
  )
}
# create an invocation query to provide sets of parameters to create_results_table
invocation_query <- '
  SELECT
    "<YOUR PROJECT HERE>" AS project,
    "<YOUR DATASET_ID HERE>" AS dataset_id,
    "<YOUR TABLE PREFIX HERE>" AS table_prefix,
    num AS year_low,
    num + 1 AS year_high
  FROM `bigquery-public-data.common_us.num_999999`
  WHERE num BETWEEN 1992 AND 1995
'
# call condusco's run_pipeline_gbq to iteratively run create_results_table over invocation_query's results
run_pipeline_gbq(
  create_results_table,
  invocation_query,
  project = '<YOUR PROJECT HERE>',
  use_legacy_sql = FALSE
)

SAP BusinessObjects - Merging dimensions with no directly related attributes

Given the following 3 queries
Query 1
SELECT
COMPONENTINFO__SOFTWARE.SOFTWARENAME,
COMPONENTINFO__SOFTWARE.SOFTWAREVERSION,
COMPONENTINFO__SOFTWARE.PARENTOID,
COMPONENTINFO__SOFTWARE.OID,
COMPONENT_VERSION_INFO.OID,
COMPONENT_VERSION_INFO.HWSERIAL,
COMPONENT_VERSION_INFO.COMPONENTID
FROM
COMPONENTINFO__SOFTWARE,
COMPONENT_VERSION_INFO
WHERE
( COMPONENTINFO__SOFTWARE.PARENTOID=COMPONENT_VERSION_INFO.OID )
Query 2
SELECT
V_MACH.OID,
V_MACH.NAME,
V_MACH.IPADDR
FROM
V_MACH
Query 3
SELECT
V_VERSIONINFO.MACHINEOID,
VM_VERSIONINFO_VERSIONINFOINFO.HWSERIAL,
VM_VERSIONINFO_VERSIONINFOINFO.OSVERSION,
VM_VERSIONINFO_VERSIONINFOINFO.PARENTOID,
VM_VERSIONINFO_VERSIONINFOINFO.OID,
COMPONENT_VERSION_INFO.PARENTOID,
V_VERSIONINFO.OID
FROM
V_VERSIONINFO,
VM_VERSIONINFO_VERSIONINFOINFO,
COMPONENT_VERSION_INFO
WHERE
( VM_VERSIONINFO_VERSIONINFOINFO.PARENTOID=V_VERSIONINFO.OID )
I'm trying to produce a report (Webi, using the rich client) that shows in 1 table:
V_MACH.NAME, COMPONENTINFO__SOFTWARE.SOFTWARENAME, COMPONENTINFO__SOFTWARE.SOFTWAREVERSION
But no matter what dimensions I merge, it won't let me put the NAME field alongside the software version fields.
I've tried to merge on:
VM_VERSIONINFO_VERSIONINFOINFO.HWSERIAL + COMPONENT_VERSION_INFO.HWSERIAL.
VM_VERSIONINFO_VERSIONINFOINFO.OID + COMPONENT_VERSION_INFO.OID (I found these represent the same values for each machine)
But nothing works.
Is the only way to do a join at the SQL level? I was hoping to avoid that but if it's the only way then that's ok.
I think what you need to do is this:
1) Create a merged dimension between V_MACH.OID in Query 2 and V_VERSIONINFO.MACHINEOID in Query 3. Call the merged dim "machineoid".
2) Create a merged dimension between VM_VERSIONINFO_VERSIONINFOINFO.OID in Query 3 and COMPONENT_VERSION_INFO.OID in Query 1. Call the merged dim "oid".
3) Create a new variable as a detail type, defined as =[V_MACH.NAME], with its associated dimension set to the merged machineoid dimension. Call it name_detail.
4) Use the two merged dims in place of the underlying dims in your report block, then add in the name_detail variable.
The reason you're having trouble is that BO can't recognize what Query 2.NAME should be associated with. By creating a detail variable, you are explicitly telling it that it is an attribute of the now-merged OID dimension.

Looking for changes between two identical mysql tables

I'm struggling a bit to make my plan work. I'm getting a CSV export from a member administration system. The plan is to update this list into a WordPress database. In order to accomplish that, I have made two tables:
current_members AND new_members.
First I import the CSV into new_members, then I do a check to see if there are new members (compared with current_members) and members that need to be deleted; if so, the member gets a new or a delete flag (current_members.deleted = true & new_members.new = true). So far, so good. I just use this simple query:
UPDATE `new_memberSync` SET `new_memberSync`.`new` = '1' where `new_memberSync`.`regnr` NOT IN (select `regnr` from `current_memberSync`)
The second query to flag deleted:
UPDATE `current_memberSync` SET `current_memberSync`.`deleted` = '1' where `current_memberSync`.`regnr` NOT IN (SELECT `regnr` FROM `new_memberSync`)
Next, I want to perform a check on the content of all member fields. If a member is updated, I want to set an updated flag. I have made a new query; it's not complete yet, but I see a list of changed members. The question: how do I set changed to 1 for all instances found with this query?
SELECT `new_memberSync`.`pe_code`,`new_memberSync`.`regnr`,`new_memberSync`.`voorletters`,`new_memberSync`.`roepnaam`,`new_memberSync`.`new`
FROM `new_memberSync`
JOIN `current_memberSync` ON `new_memberSync`.`regnr` = `current_memberSync`.`regnr`
WHERE `new_memberSync`.`roepnaam` <> `current_memberSync`.`roepnaam`
OR `new_memberSync`.`straatnaam` <> `current_memberSync`.`straatnaam`
OR ..
OR .. ETC.
I tried using something like: UPDATE new_memberSync SET changed = 1 ON (HERE THE QUERY)
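MySQL supports a multi-table UPDATE, so one option is to move the join into the UPDATE itself and keep the field comparisons in the WHERE clause. A sketch, assuming the changed column exists on new_memberSync and reusing two of the comparisons from the SELECT above:
UPDATE `new_memberSync`
JOIN `current_memberSync` ON `new_memberSync`.`regnr` = `current_memberSync`.`regnr`
SET `new_memberSync`.`changed` = 1
WHERE `new_memberSync`.`roepnaam` <> `current_memberSync`.`roepnaam`
OR `new_memberSync`.`straatnaam` <> `current_memberSync`.`straatnaam`
-- OR ... the remaining field comparisons, as in the SELECT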

Prevent Duplicate headers in flat file destination - SSIS

I need some help.
I am importing some data into a .csv file from an OLE DB source. I don't want the headers to appear twice in the destination. If I uncheck the "Column names in first data row" property, the headers don't get populated on the first execution either.
Output as of now.
Col1,Col2
A,B
Col1,Col2
C,D
How can I make the package run in such a way that if the file is empty, the headers get inserted, but if the execution happens again, the headers are not included, just the data?
There was a similar thread, but I wasn't able to apply the solution, as I didn't know how to use expressions to get the number of rows of the destination itself. It was long ago, so I created a new question.
Your help is deeply appreciated.
-Akshay
Perhaps I'm missing something, but this works for me. I am not having the read-only trouble with ColumnNamesInFirstDataRow.
I created a package-level variable named AddHeader, type Boolean, and set it to True. I added a Flat File Connection Manager, named FFCM, and configured it to use a CSV output of 2 columns: HeadCount (int), AddHeader (boolean). In the properties for the Connection Manager, I added an Expression for the property ColumnNamesInFirstDataRow and assigned it a value of @[User::AddHeader].
I added a script task to test the size of the file. It has read/write access to the variable AddHeader. I then used this script to determine whether the file was empty. If your definition of "empty" is that it has a header row, then I'd adjust the logic in the if check to match that length.
public void Main()
{
    string path = Dts.Connections["FFCM"].ConnectionString;
    System.IO.FileInfo stats = null;
    try
    {
        stats = new System.IO.FileInfo(path);
        // checking length isn't bulletproof based on how the disk is configured
        // but should be good enough
        // http://stackoverflow.com/questions/3750590/get-size-of-file-on-disk
        if (stats != null && stats.Length != 0)
        {
            this.Dts.Variables["AddHeader"].Value = false;
        }
    }
    catch
    {
        // no harm, no foul
    }

    Dts.TaskResult = (int)ScriptResults.Success;
}
I looped through twice to ensure I'd generate the append scenario
I deleted my file and ran the package and only had a header once.
The property that controls whether the column names will be included in the output file or not is ColumnNamesInFirstDataRow. This is a read-only property.
One way to achieve what you are trying to do would be to have two data flow tasks on the control flow surface, preceded by a script task. These two data flow tasks will be identical, except that they will refer to two different flat file connection managers. Again, the only difference between these two would be the value of ColumnNamesInFirstDataRow: one true, the other false.
Use this script task to decide whether this is the first run or a subsequent run. Persist this information and check it within the script. Either have a separate table for this information, or use some log table to infer it; see the sketch below.
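As a sketch of the separate-table idea (the table and package names here are hypothetical):
-- Hypothetical control table recording which packages have already run
CREATE TABLE dbo.PackageRunLog (PackageName VARCHAR(128) PRIMARY KEY, LastRun DATETIME);

-- Treat it as the first run when this returns 0; insert/update the row after a successful run
SELECT COUNT(*) FROM dbo.PackageRunLog WHERE PackageName = 'MyCsvExportPackage';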
The following solution worked for me; you can also try it.
Create three variables:
IsHeaderRequired
RowCount
TargetFilePath
Get the source row count using an Execute SQL task and save it in the RowCount variable.
Add a script task. Give it the read-only variables TargetFilePath and RowCount, and the read/write variable IsHeaderRequired.
Edit the script and add the following lines of code:
// Only write the header when there are rows to export and the target file is still empty
string targetFilePath = Dts.Variables["TargetFilePath"].Value.ToString();
int rowCount = (int)Dts.Variables["RowCount"].Value;
System.IO.FileInfo targetFileInfo = new System.IO.FileInfo(targetFilePath);

if (rowCount > 0)
{
    if (targetFileInfo.Length == 0)
    {
        Dts.Variables["IsHeaderRequired"].Value = true;
    }
    else
    {
        Dts.Variables["IsHeaderRequired"].Value = false;
    }
}

Dts.TaskResult = (int)ScriptResults.Success;
Connect your script component to your database
Click the connection manager of the flat file (i.e. your target file) and go to its properties. In the Expressions property, add the following, as shown in the screenshot:
Map ConnectionString to the variable "TargetFilePath".
Map ColumnNamesInFirstDataRow to "IsHeaderRequired".
Expression for the Flat File connection manager (screenshot).
Final package (screenshot).
Hope this helps
A solution:
First, add an SSIS integer variable in the scope of the Foreach Loop or higher; I'll call this RowCount, and make its default value negative (this is important!). Next, add a Row Count transformation to your Data Flow, and assign the result to the RowCount SSIS variable we just made. Third, select your Connection Manager (don't double-click it) and open the Properties window (F4). Find the Expressions property, select it, and hit the ellipsis (...) button. Select the ColumnNamesInFirstDataRow property, and use an expression like this:
@[User::RowCount] < 0
Now, when your package starts, RowCount has a static value of -1 or another negative number. When the data flow starts for the first time in your loop, the ColumnNamesInFirstDataRow property will have a value of TRUE. When the first data flow completes, the row count (even if it's zero) is written to the RowCount variable. On the second iteration of the loop, the Connection Manager is then reconfigured NOT to write column names...