How do I mix different channels without the colors affecting one another? - unreal-blueprint

I'm trying to set up a Material where I can set the color of each channel individually through a color mask. I figured that much out; however, I'm now trying to add the channels back together.
It works initially, but when I change the color of one channel it starts to affect the others.
I can't seem to find a node that will allow me to mix these different channels together.

I don't have Unreal available at the moment, so I can't give you a screenshot, but I'll try to describe it.
Instead of multiplying the colors, you could just use a Linear Interpolate node to combine them one by one.
You plug the result of Mask(R) into a Linear Interpolate node as its Alpha, then plug the color you want the R channel to be into B. You then create the next Linear Interpolate node, plug Mask(G) into its Alpha, the result of the first Linear Interpolate into A, and the color you want the G channel to be into B. Proceed with further nodes until every channel is covered.
How does this work?
Linear Interpolate maps Alpha values between 0 and 1 onto whatever you plug into A and B. You can think of the Linear Interpolate node as a filter driven by a mask: if you want to combine two shapes, you plug your mask into Alpha, and everywhere the mask is 1 you see B instead of A. You can read more about this in this article; you will find a lot of the essentials under "Math Signed Distance Fields - COMBINE, BLEND, AND MASK SHAPES".
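If it helps to see the same thing as plain math, here is a per-channel sketch in ActionScript-style code (the lerp function and the base/color/mask variable names are just placeholders for the material inputs, not actual Unreal nodes):

// lerp(a, b, alpha) = a * (1 - alpha) + b * alpha, which is what the Lerp node does per channel
function lerp(a:Number, b:Number, alpha:Number):Number {
    return a * (1 - alpha) + b * alpha;
}

// Assumed example inputs; each mask is 0 or 1 for its own region.
var baseColor:Number = 0.0;
var colorR:Number = 1.0, colorG:Number = 0.5, colorB:Number = 0.2;
var maskR:Number = 1.0, maskG:Number = 0.0, maskB:Number = 0.0;

// Each mask only lets its own replacement color through, so changing colorG
// can never bleed into the area selected by maskR.
var mixed:Number = lerp(baseColor, colorR, maskR); // first Lerp node
mixed = lerp(mixed, colorG, maskG);                // second Lerp node
mixed = lerp(mixed, colorB, maskB);                // third Lerp node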

Related

Link multiply output to color hue value input

I'm trying to get a multiply patch's output value to change only the hue of a color. I want to keep saturation and luminance set to fixed values.
With my current configuration it is only changing luminance. It looks like it is changing all RGB channels equally. What would be the correct way to manipulate the HSL channels individually?
After some research I found the solution: I was missing the 'pack' patch. It's a very useful patch I wasn't aware of. This is how my workflow ended up:
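For what it's worth, the underlying idea of "change hue, keep saturation and luminance fixed" looks like this in plain ActionScript (a sketch only; the patch editor packs the three values for you, and multiplyOutput is just a placeholder for the multiply patch's result):

// Standard HSL-to-RGB conversion; h, s and l are all in the 0..1 range.
function hue2rgb(p:Number, q:Number, t:Number):Number {
    if (t < 0) t += 1;
    if (t > 1) t -= 1;
    if (t < 1 / 6) return p + (q - p) * 6 * t;
    if (t < 1 / 2) return q;
    if (t < 2 / 3) return p + (q - p) * (2 / 3 - t) * 6;
    return p;
}

function hslToRgb(h:Number, s:Number, l:Number):uint {
    var r:Number, g:Number, b:Number;
    if (s == 0) {
        r = g = b = l; // achromatic
    } else {
        var q:Number = l < 0.5 ? l * (1 + s) : l + s - l * s;
        var p:Number = 2 * l - q;
        r = hue2rgb(p, q, h + 1 / 3);
        g = hue2rgb(p, q, h);
        b = hue2rgb(p, q, h - 1 / 3);
    }
    return (uint(r * 255) << 16) | (uint(g * 255) << 8) | uint(b * 255);
}

// Only the hue varies (driven by the multiply output, assumed normalized to 0..1);
// saturation and luminance stay fixed.
var multiplyOutput:Number = 0.35;
var color:uint = hslToRgb(multiplyOutput, 0.8, 0.5);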

AS3 - How to calculate intersection between drawing and bitmap

I'm trying to create a handwriting game with AS3 in Adobe Animate. I've created my board and functions (drawing, erasing, saving, printing and a color panel) so far, but I need to show a score. To do that, I thought I could calculate the percentage of intersection between the drawing and a bitmap image (which is my background for now).
Is there any way to do it? Or can you at least tell me which function I should try? Thanks a lot.
Note: Here are 2 images from my game. They should make it easy to understand what I'm trying to explain and do.
Players will try to draw correctly (drawn board)
Empty Board
Just a suggestion:
Let's assume that you are recording draw data: a set of points, captured at the frame rate from the mouse position and stored in an array.
I used 8 points in my own example; the result would look like this (6 of 8 = 75% passed):
► black line is the correct path (trace bitmap) ► red is the client's draw
We need to walk the whole points array and validate each point, so a percentage is easily obtained.
How to validate
Each point contains x and y; to check whether it lands on a black pixel (the bitmap trace), we just do:
if (bitmapData.getPixel(point.x, point.y) == 0x0) // 0x0 is black
getPixel returns an integer that represents an RGB pixel value from a BitmapData object at a specific point (x, y). The getPixel() method returns an unmultiplied pixel value. No alpha information is returned.
Improvement
This practice becomes more accurate when more points are captured during the draw. Also, the trace bitmap must look like the one above, not a dashed (smoothed, styled, ...) line. However, you can keep this trace bitmap in the background (invisible) and only present a dashed copy of it, with a colorful background (like grass and rock textures or other graphical improvements), to the players.
Note
Also define a maximum search size if you need more speed when validating the draw. This maximum is used to skip some points; for example, if max = 5 and we have 10 points, points 0, 2, 4, 6 and 8 can be ignored.
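Putting the whole thing together, a minimal sketch of the scoring pass could look like this (points and traceBitmapData are assumed names for the recorded draw points and the trace image's BitmapData):

import flash.display.BitmapData;
import flash.geom.Point;

// Returns the percentage of recorded points that land on the black trace.
// maxChecks > 0 skips points for speed, as described in the note above.
function scorePercentage(points:Array, traceBitmapData:BitmapData, maxChecks:int = 0):Number {
    var step:int = (maxChecks > 0 && points.length > maxChecks)
        ? int(Math.ceil(points.length / maxChecks))
        : 1;
    var checked:int = 0;
    var passed:int = 0;
    for (var i:int = 0; i < points.length; i += step) {
        checked++;
        var p:Point = points[i];
        if (traceBitmapData.getPixel(int(p.x), int(p.y)) == 0x0) { // 0x0 is black
            passed++;
        }
    }
    return checked > 0 ? (passed / checked) * 100 : 0; // e.g. 6 of 8 => 75
}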

Libgdx - 3d particle blending issue

I'm trying to create a simple particle system containing 2 emitters (controllers): 1 for fire and 1 for smoke. I'm doing it programmatically, not using the editor. I started with the fire: I created a BillboardParticleBatch and set its texture, and the result was not good at all. The reason is that the blending function inside the batch is not correct, so I overrode it in my own batch and changed it to GL_SRC_ALPHA, GL_ONE, which produces better results.
So now, using my batch, the fire looks good. Then I wanted to add smoke, for which I wanted non-premultiplied blending, so again I created a new batch extending BillboardParticleBatch, made it use SRC_ALPHA, ONE_MINUS_SRC_ALPHA, and assigned it the smoke texture. The smoke is perfect.
The main problem is how to merge both controllers. If I put them both into a ParticleEffect and put that effect into a ParticleSystem, I'm able to render them together, but when particles from the two emitters overlap, they sometimes blend incorrectly. This is probably because they are not sorted together, since they use 2 batches; the sort is applied per batch, not per effect. So this is not a solution :(
I could try using only one batch and a TextureRegion influencer in order to use different textures for the smoke and fire inside 1 batch, but how would I solve the blending problem, since the blend function would be one and the same for the whole batch?
Is there a way to somehow merge both batches into one and tell it to render the fire using additive blending and the smoke using plain alpha blending?
Thanks in advance!
The problem is solved. It now works fine with 2 batches; I just added depth testing to the material when overriding the particle batch.

Output values in Pixel Bender (trace)

I'm absolutely new to Pixel Bender (started a couple of hours ago).
My client wants a classic folding effect for his app. I've shown him some examples of folding effects done via masks and he didn't like them, so I decided to dive into Pixel Bender to try to write him a custom shader.
Thank God I've found this one, and I'm modifying it by playing with values. But how can I trace / print / echo values from the Pixel Bender Toolkit? This would speed up all the tests I'm doing a lot.
Here I've found in the comments that it's not possible; is that true?
Thanks a lot
Well, you cannot directly trace values in Pixel Bender, but you can, for example, build a known bitmap, apply the filter with the requested values, and then trace the resultant bitmap's pixel values to find out which source point corresponds to the one you selected.
For example, you make a 256x256 bitmap with each point (x, y) having a red component of x and a green component of y. Then you apply the filter with the selected values and either display the result, or respond to clicks and trace the underlying color's red and green values, which will give you the exact point on the source bitmap.
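A rough ActionScript sketch of that setup (yourShader stands for the Shader you built from the .pbj being tested; everything else is standard BitmapData plumbing, assumed to run in a frame script or document class):

import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.display.Sprite;
import flash.events.MouseEvent;
import flash.filters.ShaderFilter;
import flash.geom.Point;

// 256x256 probe bitmap: red channel = x, green channel = y.
var probe:BitmapData = new BitmapData(256, 256, false, 0x000000);
for (var y:int = 0; y < 256; y++) {
    for (var x:int = 0; x < 256; x++) {
        probe.setPixel(x, y, (x << 16) | (y << 8));
    }
}

// Run the Pixel Bender filter over the probe (yourShader is assumed to exist).
var result:BitmapData = new BitmapData(256, 256, false, 0x000000);
result.applyFilter(probe, probe.rect, new Point(0, 0), new ShaderFilter(yourShader));

// Show the result and trace which source pixel a clicked point came from.
var holder:Sprite = new Sprite();
holder.addChild(new Bitmap(result));
addChild(holder);
holder.addEventListener(MouseEvent.CLICK, function(e:MouseEvent):void {
    var c:uint = result.getPixel(int(e.localX), int(e.localY));
    trace("source x = " + ((c >> 16) & 0xFF) + ", source y = " + ((c >> 8) & 0xFF));
});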

Best way to be able to pick multiple colors/designs of symbols dynamically from flash

Sorry the title's so convoluted... I must've tried for ten minutes to get a good, descriptive title! Basically, here's the scenario.
Let's say a user can pick fifty different hat colors and styles to put on an avatar. The avatar can move his head around, so we'd need the same types of movements in the symbol for when that happens.
Additionally, it gets which hat should be on the 'avatar' from a database. The problem is that we can't just make 50 different frames with a different hat on each. Each hat symbol will have the same movements; it'll just be different styles, colors and sizes.
So how can I make one variable that is the HAT, so that we can just put the appropriate hat symbol into the variable and always be able to call Hat.gotoAndPlay('tip_hat') or any other generic functions... Does that make sense?
Hope that's not too confusing. Sorry, I'm not great at the visual Flash stuff, but it's gotta be done! Thanks!
debu's suggestion about a hat container makes sense in order to separate out control of the hat movement.
You could take this further by separating out different aspects of the appearance of each hat (not just the colours, but also style, pattern, size, orientation etc.) - this would allow you to produce a wide variety of different hats from just a few parameters.
So for example 6 styles x 4 patterns x 8 colours = 192 different hats (without having to draw each one!)
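A sketch of that parametric idea in ActionScript (HatBase, its pattern and base child clips, and the frame layout are all assumptions for the illustration, not something from the original post):

import flash.display.MovieClip;
import flash.geom.ColorTransform;

// Build one hat from a few parameters instead of drawing every combination.
// HatBase is assumed to be a single library symbol whose timeline frames hold the
// styles, with a "pattern" child clip for patterns and a "base" child clip to tint.
function buildHat(style:int, pattern:int, colour:uint):MovieClip {
    var hat:MovieClip = new HatBase();
    hat.gotoAndStop(style);               // e.g. 6 style frames
    hat.pattern.gotoAndStop(pattern);     // e.g. 4 pattern frames
    var ct:ColorTransform = new ColorTransform();
    ct.color = colour;                    // e.g. one of 8 colours
    hat.base.transform.colorTransform = ct;
    return hat;
}

// 6 styles x 4 patterns x 8 colours = 192 different hats from one symbol
var myHat:MovieClip = buildHat(3, 2, 0x2255AA);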
You could do that in a number of ways. Firstly, you could have each different hat as a different symbol in the Flash Library (if you're using the IDE), then in their properties tick 'Export for ActionScript' and choose an appropriate name. It'll tell you that there's no definition for the class path and that one will be created automatically, but that's no problem, as you don't need to create a class file for these objects - they're simply MovieClip extensions with some specific data in them.
So if you do that with each hat, let's say you name them Hat_1, Hat_2, etc.; then you need to create a 'hat' object inside your avatar's head object. Whenever the hat is changed, you create a new instance of that specific hat class and put it on the stage:
//when user chooses a hat, however this is done:
var newHat:Hat_1 = new Hat_1();
avatarBody.avatarHead.hat.addChild(newHat);
That hat symbol gets added to the hat object of your avatar and will move with the head object as you'd expect. You can change the hat on the fly by simply instantiating a different hat type and removing the previous one.
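If the hat id comes from the database as a number, you can also pick the exported class by name at runtime instead of hard-coding Hat_1 (a sketch; currentHat is an assumed variable and the Hat_N export names follow the naming above):

import flash.display.MovieClip;
import flash.utils.getDefinitionByName;

var currentHat:MovieClip;

function setHat(hatId:int):void {
    // Remove the previous hat, if any.
    if (currentHat && currentHat.parent) {
        currentHat.parent.removeChild(currentHat);
    }
    // Look up the library class Hat_1, Hat_2, ... by its export name.
    // (The symbols must be exported for ActionScript so they end up in the SWF.)
    var HatClass:Class = getDefinitionByName("Hat_" + hatId) as Class;
    currentHat = new HatClass() as MovieClip;
    avatarBody.avatarHead.hat.addChild(currentHat);
}

setHat(7); // e.g. the value fetched from the database
currentHat.gotoAndPlay("tip_hat"); // generic calls work because every hat shares the same labels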
Alternatively you could do it by loading in the hat symbols from external images, and storing them in variables for when they need to be added to the avatar object. You'd do this using XML; if you don't know how that's done, I can explain.