Use the measurement standards or give users the choice

I deal with measurements in my application (height, weight, etc.). All of the equations I've found use the international standard units (kg, cm). I can easily do the conversions in the code, but I'm wondering whether I should give users the option, or make them do the conversions themselves if they don't wish to use the standard units.
Some similar programs I've seen (from the U.S.) only allow feet and inches for height and pounds for weight.

Without more information (that is, whether your application's users prefer metric over U.S. units or vice versa, the geographic scope of those users, or whether one unit system is more "authoritative" than the other), it would be appropriate to give users the choice of measurement system.
If you want to support both international units (kg and cm) and U.S. units (feet, inches, and pounds), you should try defining a base unit, the smallest unit representable in both systems, and perform operations on those base units rather than on metric or U.S. units. See this question: Metric and Imperial internal representation
This can be important if, for example, your application receives or measures heights and weights that could be in either metric or U.S. units.
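A minimal sketch of the base-unit idea in Python, assuming millimetres and grams as the base units (those particular choices, and the BMI example, are mine, not from the linked question):

```python
# Store every measurement in one small base unit (here mm and g -- an
# assumption; pick whatever suits your precision needs) and convert only
# at the input/output boundary.

MM_PER_INCH = 25.4          # exact by definition
G_PER_POUND = 453.59237     # exact by definition

def height_to_mm(value, unit):
    """Convert a user-supplied height to the internal base unit (mm)."""
    if unit == "cm":
        return value * 10.0
    if unit == "in":
        return value * MM_PER_INCH
    if unit == "ft_in":          # value is a (feet, inches) tuple
        feet, inches = value
        return (feet * 12 + inches) * MM_PER_INCH
    raise ValueError(f"unknown height unit: {unit}")

def weight_to_g(value, unit):
    """Convert a user-supplied weight to the internal base unit (g)."""
    if unit == "kg":
        return value * 1000.0
    if unit == "lb":
        return value * G_PER_POUND
    raise ValueError(f"unknown weight unit: {unit}")

def bmi(height_mm, weight_g):
    """Equations keep using SI internally; convert base units to m and kg."""
    height_m = height_mm / 1000.0
    weight_kg = weight_g / 1000.0
    return weight_kg / height_m ** 2

# Example: a U.S. user enters 5 ft 10 in and 170 lb.
h = height_to_mm((5, 10), "ft_in")
w = weight_to_g(170, "lb")
print(round(bmi(h, w), 1))   # ~24.4
```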

Related

Defining observation space and reward for traffic signal phase optimization for reinforcement learning

I am trying to use Reinforcement Learning for traffic signal phase optimization for improving traffic flow at intersections.
I am aware that in practice we won't be able to get the information about all the vehicles in each of the lanes.
If we use a camera for getting information about the queue length, then we can get accurate data only up to, say, 200 meters.
Should I take this into consideration while defining my observation space or can I directly use the data from sumo?
Furthermore, what should be the ideal observation space for such a task?
sumo_rl allows you to use various metrics for reward calculation, such as the pressure metric, queue length metric, etc. What would be a good choice of reward for my use case, or what factors should I consider while defining my reward?
I have tried getting metrics from the e2 detector's output file, such as throughput, lane delay and queue length. For the agent, however, I might not be able to use them (as the traci/sumo wrappers offer better implementations?). So how do I use traci to get this modified information?
Yes, you should try to match your observation space as closely to the real world as possible. SUMO can also filter the data directly (for instance with an E3 detector).
If you want to maximize flow, then the reward should also include a flow metric (throughput). It's quite easy to get via traci (as you already noticed), but I cannot tell how it integrates with your framework since you did not give details about it.
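As a rough sketch of what that could look like with traci (the traffic-light ID, the config path, and the particular mix of throughput and queue length are placeholders, not something prescribed by sumo_rl):

```python
# Hedged sketch of a per-step reward mixing throughput with queue length,
# queried through traci.
import traci

TLS_ID = "tls0"       # hypothetical traffic-light id; use your network's id
ALPHA = 0.1           # weight of the queue penalty (an arbitrary choice)

def step_reward():
    # Throughput proxy: vehicles that completed their trip during this step.
    arrived = traci.simulation.getArrivedNumber()
    # Queue proxy: halting vehicles on the lanes controlled by this signal.
    lanes = set(traci.trafficlight.getControlledLanes(TLS_ID))
    queue = sum(traci.lane.getLastStepHaltingNumber(l) for l in lanes)
    return arrived - ALPHA * queue

traci.start(["sumo", "-c", "intersection.sumocfg"])   # path is an assumption
try:
    for _ in range(3600):
        traci.simulationStep()
        r = step_reward()
        # feed r to your RL framework here
finally:
    traci.close()
```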

How can we design rewards for an RL algorithm to incentivize a group metric?

I am designing a reinforcement learning agent to guide individual cars within a bounded area of roads. The policy determines which route each car should take.
Each car can see the cars within 10 miles of it, their velocities, and the road graph of the whole bounded area. The policy of the RL-based agent must determine the actions of the cars in order to maximize the flow of traffic, let's say defined by reduced congestion.
How can we design rewards to incentivize each car to not act greedily and maximize just its own speed, but rather minimize congestion within the bounded area overall?
I tried writing a Q-learning based method for routing each vehicle, but this ended up compelling every car to greedily take the shortest route, producing a lot of congestion by crowding the cars together.
It's good to see more people working on cooperative MARL. Shameless plug for my research effort, feel free to reach out to discuss.
I think you need to take a step back with your question. You ask how to design the rewards so the agents benefit the environment rather than themselves. Now, if you wanted, you could just give each agent a reward based on the total welfare of the population. That would probably work, but you probably don't want it, because it defeats the purpose of a multi-agent environment, right?
If you want the agents to be selfish but somehow converge to a cooperative solution, that is a very difficult problem (which is what I'm working on).
If you're okay with a compromise, you could use intrinsic motivation, like in these papers:
Jaques 2018: Social Influence as Intrinsic Motivation for Multi-Agent Deep Reinforcement Learning
Vinitsky 2021: A learning agent that acquires social norms from public sanctions in decentralized multi-agent settings
Hughes 2018: Give agents a reward when there's low inequality in the population
What all of these papers have in common is that they add another component to each agent's reward. That component is pro-social, for example incentivizing the agent to increase its influence over the actions of other agents. Still, it's a less extreme solution than simply making the reward be social welfare directly.
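A minimal sketch of that shared pattern, assuming the pro-social component is simply the mean reward of the other agents (a stand-in for the influence, sanction, or inequality terms the papers actually use):

```python
# Each agent's shaped reward = (1 - beta) * its own reward
#                            + beta * mean reward of the other agents.
import numpy as np

def shaped_rewards(individual_rewards, beta=0.5):
    """individual_rewards: per-agent selfish rewards for one step."""
    r = np.asarray(individual_rewards, dtype=float)
    n = len(r)
    # Mean reward of everyone *else*, computed per agent.
    others_mean = (r.sum() - r) / (n - 1)
    return (1.0 - beta) * r + beta * others_mean

# Example: one car gained speed by cutting in (reward 2.0) while slowing
# the other three cars down.
print(shaped_rewards([2.0, -1.0, -1.0, -1.0], beta=0.5))
# -> [ 0.5 -0.5 -0.5 -0.5]: the greedy move no longer looks as attractive.
```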

Using HMM for offline character recognition

I have extracted features from many images of isolated characters (such as gradient, neighbouring pixel weight, and geometric properties). How can I use HMMs as a classifier trained on this data? All the literature I have read about HMMs refers to states and state transitions, but I can't connect that to features and class labelling. The example on JAHMM's home page doesn't relate to my problem.
I need to use HMM not because it will work better than other approaches for this problem but because of constraints on project topic.
There was an answer to this question for online recognition, but I want the same for offline, and in a little more detail.
EDIT: I partitioned each character into a grid with a fixed number of squares. Now I am planning to perform feature extraction on each grid block and thus obtain a sequence of features for each sample by moving from left to right and top to bottom.
Would this represent an adequate "sequence" for an HMM, i.e. would an HMM be able to capture the temporal variation of the data, even though the character is not drawn from left to right and top to bottom? If not, please suggest an alternative approach.
Should I feed in a lot of features or start with a few? How do I know if the HMM is underperforming or if the features are bad? I am using JAHMM.
Extracting stroke features is difficult; can they be logically combined with grid features (since an HMM expects a sequence generated by some random process)?
I've usually seen neural networks used for this sort of recognition task, e.g. here, here, here, and here. Since a simple Google search turns up so many hits for neural networks in OCR, I'll assume you are set on using HMMs (a project limitation, correct?). Regardless, those links can offer some insight into gridding the image and obtaining image features.
Your approach for turning a grid into a sequence of observations is reasonable. In this case, be sure you do not confuse observations and states. The features you extract from one block should be collected into one observation, i.e. a feature vector. (In comparison to speech recognition, your block's feature vector is analogous to the feature vector associated with a speech phoneme.) You don't really have much information regarding the underlying states. This is the hidden aspect of HMMs, and the training process should inform the model how likely one feature vector is to follow another for a character (i.e. transition probabilities).
Since this is an off-line process, don't be concerned with the temporal aspects of how characters are actually drawn. For the purposes of your task, you've imposed a temporal order on the sequence of observations with your left-to-right, top-to-bottom block ordering. This should work fine.
As for HMM performance: choose a reasonable vector of salient features. In speech recognition, the dimensionality of a feature vector can be high (>10). (This is also where the cited literature can assist.) Set aside a percentage of the training data so that you can properly test the model. First, train the model, then evaluate it on the training dataset. How well does it classify your characters? If it does poorly, re-evaluate the feature vector. If it does well on the training data, test the generality of the classifier by running it on the reserved test data.
As for the number of states, I would start with a heuristically derived number. Assuming your character images are scaled and normalized, perhaps something like 40%(?) of the blocks are occupied? This is a crude guess on my part, since a source image was not provided. For an 8x8 grid, this would imply that about 25 blocks are occupied. We could then start with 25 states - but that's probably naive: empty blocks can convey information (meaning the number of states might increase), but some feature sets may be observed in similar states (meaning the number of states might decrease). If it were me, I would probably pick something like 20 states. Having said that: be careful not to confuse features and states. Your feature vector is a representation of things observed in a particular state. If the tests described above show your model is performing poorly, tweak the number of states up or down and try again.
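If it helps, here is a hedged sketch of the one-model-per-character setup, shown with Python's hmmlearn rather than JAHMM (the structure carries over): each image becomes a sequence of block feature vectors, one HMM is trained per class, and an unknown image gets the label of the model with the highest log-likelihood.

```python
import numpy as np
from hmmlearn import hmm

N_STATES = 20          # the heuristic starting point discussed above

def train_models(sequences_by_label):
    """sequences_by_label: {label: [seq, ...]}, each seq shaped (n_blocks, n_features)."""
    models = {}
    for label, seqs in sequences_by_label.items():
        X = np.vstack(seqs)               # hmmlearn wants stacked sequences...
        lengths = [len(s) for s in seqs]  # ...plus their individual lengths
        m = hmm.GaussianHMM(n_components=N_STATES,
                            covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, seq):
    """Return the label whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(seq))
```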
Good luck.

Using 10-node tetrahedrons, is strain continuous between neighbouring tetrahedrons?

I'm trying to implement a Finite Element Analysis algorithm. I solve K u = f to get the displacement u, then calculate the strain from u, then calculate the stress. Finally, I use the stress to calculate the von Mises stress and visualize it. From the result I find the strain is not continuous between tetrahedrons.
I use 10-node tetrahedrons as elements, so the displacement is a second-order polynomial within every element. The displacement is enforced to be continuous between tetrahedrons, and the strain, which consists of the first-order derivatives of the displacement, is continuous inside every tetrahedron. But I'm not sure: is this true across the interface between tetrahedrons?
Only the components of strain tangent to the adjoining face are guaranteed continuous.
This follows from displacement continuity: when you take derivatives in directions tangent to the interface, they are the same on both sides.
Commercial FEM programs typically do some post-processing averaging to make the other components look continuous. Note that the strain components normal to an element boundary are only expected to be continuous if the underlying constitutive model is continuous, so such averaging is not always appropriate.
You should not compute the stress and strain at the nodes but inside the elements. You can, for example, choose 4 Gauss points and compute the values there. You then have to think about a scheme for getting the values computed at the Gauss points onto the tet nodes.
There is a Mathematica application example which illustrates this. Unfortunately the web page is no longer available, but the notebooks are here. You'll find the example in the application example section under Finite Element Method, Structural Mechanics 3D (in the old HelpBrowser). If you have difficulties I could convert it to PDF and send it to you.
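As a rough sketch of the Gauss-point-to-node step described above (a plain element average followed by nodal averaging; the array layout is an assumption about your mesh data, and commercial codes use more careful extrapolation):

```python
import numpy as np

def nodal_average(element_nodes, gauss_strains, n_nodes):
    """
    element_nodes : (n_elem, 10) int array of node indices per 10-node tet
    gauss_strains : (n_elem, 4, 6) strains at 4 Gauss points, 6 Voigt components
    returns       : (n_nodes, 6) averaged nodal strain
    """
    nodal = np.zeros((n_nodes, 6))
    counts = np.zeros(n_nodes)
    # Reduce each element to one strain value (simple average over Gauss points).
    elem_strain = gauss_strains.mean(axis=1)          # shape (n_elem, 6)
    # Accumulate the element value at every node of that element, then average.
    for e, nodes in enumerate(element_nodes):
        for n in nodes:
            nodal[n] += elem_strain[e]
            counts[n] += 1
    return nodal / np.maximum(counts, 1)[:, None]     # guard unused nodes
```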

Coarse-grained vs fine-grained

What is the difference between coarse-grained and fine-grained?
I have searched these terms on Google, but I couldn't find what they mean.
From Wikipedia (granularity):

Granularity is the extent to which a system is broken down into small parts, either the system itself or its description or observation. It is the extent to which a larger entity is subdivided. For example, a yard broken into inches has finer granularity than a yard broken into feet.

Coarse-grained systems consist of fewer, larger components than fine-grained systems; a coarse-grained description of a system regards large subcomponents while a fine-grained description regards smaller components of which the larger ones are composed.
In simple terms:
Coarse-grained - larger components than fine-grained; large subcomponents. A coarse-grained operation simply wraps one or more fine-grained services together.
Fine-grained - smaller components of which the larger ones are composed; lower-level services.
It is generally better to have coarse-grained service operations that are composed of fine-grained operations.
Coarse-grained: A few objects hold a lot of related data, which is why the services have a broader scope in functionality. Example: a single "Account" object holds the customer name, address, account balance, opening date, last change date, etc.
Thus: increased design complexity, but a smaller number of calls for the various operations.
Fine-grained: More objects, each holding less data, which is why the services have a narrower scope in functionality. Example: an Account object holds the balance, a Customer object holds the name and address, an AccountOpenings object holds the opening date, etc.
Thus: decreased design complexity, but a higher number of calls for the various service operations.
Relationships are then defined between these objects.
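A rough sketch of the two designs just described (the coarse class is renamed here only so both fit in one file; field names are the ones mentioned above):

```python
from dataclasses import dataclass
from datetime import date

# Coarse-grained: one object carries all the related data.
@dataclass
class CoarseAccount:
    customer_name: str
    address: str
    balance: float
    opening_date: date
    last_change_date: date

# Fine-grained: more objects, each holding less, linked by relationships.
@dataclass
class Customer:
    name: str
    address: str

@dataclass
class Account:
    balance: float
    customer: Customer        # relationship to the fine-grained Customer

@dataclass
class AccountOpenings:
    opening_date: date
    account: Account          # relationship to the fine-grained Account
```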
One more way to understand this is to think in terms of communication between processes and threads. Processes communicate with the help of coarse-grained communication mechanisms like sockets, signal handlers, shared memory, semaphores and files. Threads, on the other hand, have access to the shared memory space that belongs to a process, which allows them to use finer-grained communication mechanisms.
Source: Java concurrency in practice
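A toy illustration of that contrast in Python: the processes exchange whole messages over a pipe, while the threads touch the same object in shared memory.

```python
import threading
from multiprocessing import Process, Pipe

# Coarse-grained: a separate process, talking through a pipe.
def worker_process(conn):
    conn.send("result computed in another address space")
    conn.close()

# Fine-grained: a thread mutating shared state in the same address space.
shared = {"count": 0}
lock = threading.Lock()

def worker_thread():
    with lock:                 # still needs synchronisation, but no copying
        shared["count"] += 1

if __name__ == "__main__":
    parent, child = Pipe()
    p = Process(target=worker_process, args=(child,))
    p.start()
    print(parent.recv())       # the message crosses a process boundary
    p.join()

    t = threading.Thread(target=worker_thread)
    t.start()
    t.join()
    print(shared["count"])     # the thread updated our memory directly
```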
In the context of services:
http://en.wikipedia.org/wiki/Service_Granularity_Principle

By definition a coarse-grained service operation has broader scope than a fine-grained service, although the terms are relative. The former typically requires increased design complexity but can reduce the number of calls required to complete a task.
A fine-grained service interface is roughly the same thing as a chatty interface.
In terms of a dataset, such as a text file: coarse-grained means we can transform the whole dataset but not an individual element of it, while fine-grained means we can transform individual elements of the dataset.
Coarse-grained and fine-grained both concern optimizing the number of services, but the difference is in the level. I like to explain with an example, so you will understand easily.
Fine-grained: For example, I have 100 services like findById, findByCategory, findByName, and so on. Instead of that many services, why not provide find(id, category, name, ...)? This way we can reduce the number of services. This is just an example, but the goal is to optimize the number of services.
Coarse-grained: For example, I have 100 clients, and each client has its own set of 100 services, so I would have to provide 100*100 services in total. That is very difficult to manage. Instead, I identify all the common services that apply to most of the clients as one shared service set and keep the remaining ones separate. For example, if 50 of the 100 services are common, I only have to manage 100*50 + 50 services.
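A hedged sketch of the consolidation described in this answer (class and field names are illustrative only): the many narrow lookups form the fine-grained interface, and the single find() with optional criteria is the broader operation that replaces them.

```python
class FineGrainedCustomerService:
    def __init__(self, customers):
        self._customers = customers      # list of dicts: {"id", "category", "name"}

    def find_by_id(self, customer_id):
        return [c for c in self._customers if c["id"] == customer_id]

    def find_by_category(self, category):
        return [c for c in self._customers if c["category"] == category]

    def find_by_name(self, name):
        return [c for c in self._customers if c["name"] == name]
    # ...and potentially many more single-purpose operations

class CoarseGrainedCustomerService:
    def __init__(self, customers):
        self._customers = customers

    def find(self, **criteria):
        """One broader operation: callers pass whichever criteria they have,
        e.g. find(category="retail", name="Acme")."""
        return [c for c in self._customers
                if all(c.get(k) == v for k, v in criteria.items())]
```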
Coarse-grained granularity does not always mean bigger components; if you go by the literal meaning of the word coarse, it means rough, or not fine. For example, in software project management, if you break a small system down into a few components that are equal in size but vary in complexity and features, this could lead to coarse-grained granularity. In contrast, for a fine-grained breakdown, you would divide the components based on the cohesiveness of the functionality each component provides.
Coarse-grained and fine-grained: both of these modes define how cores are shared between multiple Spark tasks. As the name suggests, fine-grained mode shares the cores at a more granular level. Fine-grained mode has been deprecated by Spark and will soon be removed.
Coarse-grained services provide broader functionality compared to fine-grained services. Depending on the business domain, a single service can be created to serve a single business unit, or multiple specialised fine-grained services can be created if the subunits are largely independent of each other.
A coarse-grained service may be less adaptable to change due to its size, while fine-grained services may introduce the additional complexity of managing multiple services.
Granularity has an important application when storing large-scale data, where space is at a premium.
The meaning of granularity according to the Oxford dictionary is:
"The scale or level of detail in a set of data."
According to the Cambridge dictionary:
"A lot of small details included in information, making it possible for you to understand very clearly what is happening"
So, going by the word's specific meaning, it refers to a kind of partitioning of data for a continuous process.
Finer granularity means smaller partition intervals, so that a detailed representation can be achieved.
Coarser granularity, on the other hand, means larger frame intervals, which saves storage.
Which type of granularity to use is application specific.
For example, if we have an application where recent information is more important than older information, the recent data can be represented in detail with finer granularity, while the older data can be represented with coarser granularity.