Link multiply output to color hue value input - spark-ar-studio

I'm trying to have a multiply patch's output value change only the hue of a color; I want to keep saturation and luminance fixed at set values.
With my current configuration it only changes luminance. It looks like it's changing all RGB channels equally. What would be the correct way to manipulate HSL channels individually?

After some research I found the solution: I was missing the 'Pack' patch, a very useful patch I wasn't aware of. This is how my workflow ended up:
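For reference, the Pack patch's job here is just to reassemble an HSL vector from one varying component (the hue, driven by the multiply patch) and two constants. A minimal Python sketch of the same idea using the standard-library colorsys module; the assumption is that the multiplier output is treated as a 0-1 hue value:

```python
import colorsys

def hue_shifted_color(multiplier_output, saturation=1.0, lightness=0.5):
    """Equivalent of packing a varying hue with fixed S and L:
    only the hue channel is driven by the incoming value."""
    hue = multiplier_output % 1.0  # wrap hue into [0, 1)
    # colorsys uses HLS ordering: hue, lightness, saturation
    return colorsys.hls_to_rgb(hue, lightness, saturation)

print(hue_shifted_color(0.0))  # pure red at full saturation
```

The key point is the same as in the patch graph: only the first packed component varies, while saturation and lightness stay constant.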

Related

How do I mix different channels without the colors affecting one another?

I'm trying to set up a Material whose color I can set for each channel individually through a color mask. I figured that much out; however, I'm trying to add the channels back together.
It works initially, but when I change the color of one channel it starts to affect the others.
I can't seem to find a node that will allow me to mix these different channels together.
I don't have Unreal available at the moment, so I can't give you a screenshot, but I'll try to describe it.
Instead of multiplying the color, you could just use a Linear Interpolate node to combine them one by one.
You plug the result of Mask(R) into a Linear Interpolate node as its Alpha. You then plug the color you want the R channel to be into B. You then create the next Linear Interpolate node and plug Mask(G) into Alpha and the result of the first Linear Interpolate into A, the color you want the G channel to be into B. Proceed with the next node until everything is covered.
How does this work?
Linear Interpolate uses linear interpolation to map values between 0 and 1 to whatever you plug into A and B. You can think of the Linear Interpolate node as a filter for a mask. If you want to combine one shape onto another, you plug your mask input into Alpha, and everything in B will be seen instead of A where the mask is 1. You can read more about this in this article; you will find a lot of essentials under "Math Signed Distance Fields - COMBINE, BLEND, AND MASK SHAPES".
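To make the chaining concrete, here is a small Python sketch of what the node chain computes per pixel. This is purely illustrative; in Unreal the same math runs inside the material graph, and the function names here are my own:

```python
def lerp(a, b, alpha):
    # classic linear interpolation: alpha=0 gives a, alpha=1 gives b
    return tuple(x + (y - x) * alpha for x, y in zip(a, b))

def composite(base, layers):
    """Chain of Linear Interpolate nodes: each (mask, color) layer
    replaces the running result wherever its mask is 1."""
    result = base
    for mask_value, color in layers:
        result = lerp(result, color, mask_value)
    return result

black = (0.0, 0.0, 0.0)
red = (1.0, 0.0, 0.0)
green = (0.0, 1.0, 0.0)
# pixel where the R-mask is 1 and the G-mask is 0 -> red
print(composite(black, [(1.0, red), (0.0, green)]))  # (1.0, 0.0, 0.0)
```

Because each layer only overwrites the result where its own mask is set, changing one channel's color no longer bleeds into the others.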

How to train YoloV4 for custom object detection on grayscale images?

I am working with images of text which have diagrams in them. My images are basically black and white, so I don't see why I need color in them. I got some decent results with the default settings, but I want to test on grayscale images too. I am using this tutorial as the base, which by default uses AlexeyAB's repo for darknet. I think I have to change the config file as follows:
channels=3 # I think I have to change it to 0
momentum=0.9
decay=0.0005
angle=0 # A link says that I have to comment these all
saturation = 1.5 # This one
exposure = 1.5 # and this one too
hue=.1 # Should I change it to 0 too?
But there is this link which says that I have to comment out hue, saturation, angle, exposure, etc. I want to know:
Do I have to save the images as Grayscale in directory or the code will do it by itself?
Does some other configuration have to be changed apart from setting channels=1? Setting hue to 0 is also suggested in this link.
Do I need to modify some function that deals with loading the images, such as the load_data_detection function given in this link?
Just changing channels=1 in the config file should work. If not, then comment out other parameters like angle, hue, exposure, and saturation and try again.
There are posts online and on the official repository suggesting to edit the channels, hue value, etc.
Here is what works for SURE (I tried):
For the training set, either take grayscale images or convert RGB to grayscale using OpenCV:
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
Train the YOLOv4 AS YOU WOULD FOR RGB, i.e. without messing with channel, hue, etc.
Note: Don't forget to set the steps, batch size, number of channels, etc. as you normally would for RGB images; just train the model on grayscale images instead of RGB.
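As an aside, cv2.cvtColor with COLOR_BGR2GRAY is just a weighted sum of the channels. A pure-Python sketch of the same conversion for a single pixel, using the standard ITU-R BT.601 luma weights that OpenCV documents for this conversion (the helper name is my own):

```python
def bgr_to_gray(b, g, r):
    """Same luma weights OpenCV's COLOR_BGR2GRAY uses:
    Y = 0.299*R + 0.587*G + 0.114*B, rounded to an 8-bit value."""
    return int(round(0.299 * r + 0.587 * g + 0.114 * b))

print(bgr_to_gray(255, 255, 255))  # 255: white stays white
print(bgr_to_gray(0, 0, 255))      # 76: pure red's luma
```

This is also why a model trained on RGB generalizes poorly to grayscale inputs: after conversion all three channels collapse onto one luma axis, a distribution the RGB-trained filters never saw.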
A few personal observations, for which I couldn't find a theoretical explanation:
YOLOv4 trained on RGB images, won't work on B&W/Grayscale images.
YOLOv4 trained on B&W/Grayscale images, won't work on RGB.
Hope this helps.
Edit: I have yet to verify whether this is computationally more expensive than a model trained with reduced channels, although it did not appear to reduce or improve inference times.

Dynamically change line color dcc.Graph Plotly Dash between annotations

I'm writing a 'template creation' app using the dcc.Graph image annotation features in Plotly Dash.
The user adds multiple rectangles for specific features in the image (an invoice) and my callback captures the coordinates of each rectangle via the relayoutData variable. I want to use a different color for each rectangle, but can't figure out how to do it.
It seems like the only way to change the newshape fillcolor property is to replace the whole figure, but then I lose all the previous shapes.
All and any help appreciated.
Andrew
I have just found the following demo that does exactly what I am trying to do:
https://dash-gallery.plotly.host/dash-image-annotation/
Now to unpack the logic and adapt it to my context ... Happy 2021!
Andrew
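For anyone landing here later: the trick that demo appears to use is to update the layout's newshape fill color before the next shape is drawn, cycling through a palette based on how many shapes already exist. A minimal sketch of that color-picking logic; the figure dict structure is standard Plotly, but the palette and helper name are my own placeholders:

```python
COLORS = ["#ff0000", "#00ff00", "#0000ff", "#ff00ff"]

def next_shape_color(figure_dict):
    """Pick a fill color based on how many shapes have been drawn,
    cycling through the palette."""
    shapes = figure_dict.get("layout", {}).get("shapes", [])
    return COLORS[len(shapes) % len(COLORS)]

fig = {"layout": {"shapes": [{}, {}]}}  # two rectangles drawn so far
print(next_shape_color(fig))  # the third rectangle gets "#0000ff"
```

In the callback you would then apply it with something like fig.update_layout(newshape=dict(fillcolor=next_shape_color(figure))), which only affects shapes drawn afterwards and leaves the existing ones untouched.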

Custom GitHub badges with dynamic color

I'm struggling to create a shields.io badge that changes color dynamically.
I am able to use the JSON response to parse text into a badge and set the color to orange:
https://img.shields.io/badge/dynamic/json.svg?label=custom&url=https://jsonplaceholder.typicode.com/posts&query=$[1].id&colorB=orange
Works well...
However, I want to change the color according to a rule. I might return the HEX color in the JSON as well, to be parsed into the badge. I tried a public API that returns a random color to test the behavior:
http://www.colr.org/json/color/random
I get the first randomly generated color with the JSONPath $.colors[0].hex and place it in the badge URL both as the dynamic value and as the color:
https://img.shields.io/badge/dynamic/json.svg?label=custom&url=http://www.colr.org/json/color/random&query=$.colors[0].hex&colorB=$.colors[0].hex
Regardless of the randomly chosen color, the result is always somehow green (the last generated result was #D0BB79):
I would expect something like this which correctly matches the #D0BB79 color:
How can I make the color dynamic as well? Sample dynamically colored badges are provided by Coveralls.io, Codecov.io, and SonarCloud.io.
I had similar trouble, and ended up using a command line tool called anybadge which takes thresholds as parameters. This allows you to generate a badge with dynamic color in one command:
anybadge -l pylint -v 2.22 -f pylint.svg 2=red 4=orange 8=yellow 10=green
Colors can be defined by hex color code or a pre-defined set of color names.
The main difference here is that this isn't done by referencing a URL, so can't be embedded in the same way. I use this in my CI pipeline to generate various badges, then store them as project artifacts and reference them in my project README.md.
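To clarify how the thresholds in that command behave: as I understand the rule, the first threshold the value falls below wins, so a value of 2.22 lands in the 4=orange bucket. A small sketch of that selection logic (my own helper, not anybadge's actual API):

```python
def pick_color(value, thresholds):
    """Return the color of the first threshold the value falls below;
    values at or above every threshold get the last (highest) color."""
    ordered = sorted(thresholds.items())
    for limit, color in ordered:
        if value < limit:
            return color
    return ordered[-1][1]

# mirrors: anybadge -l pylint -v 2.22 ... 2=red 4=orange 8=yellow 10=green
print(pick_color(2.22, {2: "red", 4: "orange", 8: "yellow", 10: "green"}))
```

Writing the rule out like this also makes it easy to generate the colorB parameter yourself in CI if you would rather stick with shields.io URLs.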

Output values in Pixel Bender (trace)

I'm absolutely new to Pixel Bender (started a couple of hours ago).
My client wants a classic folding effect for his app. I've shown him some examples of folding effects via masks and he didn't like them, so I decided to dive into Pixel Bender to try to write him a custom shader.
Thank God I've found this one, and I'm modifying it by playing with values. But how can I trace / print / echo values from the Pixel Bender Toolkit? This would speed up all the tests I'm doing a lot.
Here I've found in the comments that it's not possible; is that true?
Thx a lot
Well, you cannot directly trace values in Pixel Bender, but you can, for example, make a specially prepared bitmap, apply the filter with the requested values, and trace the resultant bitmap's pixel values to find out which source point corresponds to the one you selected.
For example, you make a 256x256 bitmap with each point (x, y) having red component "x" and green component "y". Then you apply the filter with the selected values and either display the result, or respond to clicks and trace the underlying color's red and green values, which will give you the exact point on the source bitmap.
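The coordinate-probe idea can be sketched outside Pixel Bender too. Here it is in Python with plain nested lists; the mirror filter is just a stand-in for whatever kernel you are debugging:

```python
def make_probe_bitmap(size=256):
    """Pixel (x, y) stores x in red and y in green, so any output
    pixel tells you exactly which source pixel it came from."""
    return [[(x, y, 0) for x in range(size)] for y in range(size)]

def mirror_filter(bitmap):
    # stand-in for the Pixel Bender kernel under test: mirror horizontally
    return [list(reversed(row)) for row in bitmap]

probe = make_probe_bitmap()
out = mirror_filter(probe)
r, g, _ = out[10][3]  # click at (3, 10) in the result...
print((r, g))         # ...came from source pixel (252, 10)
```

Reading back the red and green channels at any output pixel recovers the source coordinates, which is exactly the debugging signal trace() would otherwise give you.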