DDD aggregate design example with a many-to-many relationship

I would like to run through a modelling exercise to try and understand DDD a little better, specifically for cases where there is a many-to-many relationship.
Let's take the example of Xbox users and their achievements.
If you are not familiar, this is the background:
Each game on Xbox has achievements.
Users have a list of games they own or have played, and they can unlock achievements for these games.
There is a many-to-many relationship here, as a user can have many games and a game can have many users.
Without going further, there are definitely two aggregate roots here: User and Game.
The Game aggregate root has a list of achievements.
My confusion comes in with tracking the progress of a user for a specific game,
for instance what games they have, and what achievements they have unlocked for each game.
In a typical CRUD design it might look something like this:
How would you now model this in DDD?
UserGame and UserAchievement would hold information regarding the progress of each.
This is the solution I currently have in mind, but is it right?
User and Game are both aggregate roots.
The progress of a game and its individual achievements also seem to be connected, and I would say that this is its own aggregate root. This is because, like a game and its achievements, it potentially forms a transaction boundary: when you progress on a game's achievement, this could affect the game's overall progress (e.g. unlocking an achievement would increase the total gamerscore achieved for that game).
I keep hearing that you want to resolve many-to-many relationships, which is usually fine, but here the relationship has its own data, and I don't think that data belongs to either side.
You might be able to argue that you would put this on the User aggregate, but my concern is that it adds weight to that aggregate and there is no reason for it to be there.
I don't think there are any transaction boundaries there, and I don't like the thought of having to load all the user's games whenever you load the user. Even if this were lazy loaded, I am concerned it might cause concurrency issues.
This is why I think the many-to-many relationship here is valid, and why a new aggregate root might be appropriate.
Also, is the name UserGameProgress even correct? Knowing the domain, what might you suggest?
Thank you, any help is massively appreciated.

What you are looking for here is Domain Events. From the Microsoft documentation:
Use domain events to explicitly implement side effects of changes
within your domain. In other words, and using DDD terminology, use
domain events to explicitly implement side effects across multiple
aggregates. Optionally, for better scalability and less impact in
database locks, use eventual consistency between aggregates within the
same domain.
The "progress of game" does not look to be an aggregate root to me. To update the "progress of game" you can raise a domain event such as "User_Moved" from your User aggregate and similar event from your Game aggregate. You can then subscribe to these events to update the game progress. Also, since Domain Events are in-memory you can achieve this within a single transaction.

I don't think you are going to find an authoritative answer to your question that is more specific than "it depends".
For instance, suppose the game was chess, and we want to award an achievement when a player has won at least once with each color. That's not going to be the responsibility of a "game" aggregate; we can see that immediately from the fact that the life cycles don't line up.
One quick way to sketch this suggests that we might have games raising "domain events" to announce wins, then a little state machine that takes wins as inputs and announces awards, and then some report that lists all awards for a specific player id.
The state machine, and the accompanying domain logic that takes in wins and emits awards, is probably an aggregate all by itself: one instance for each player.
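To make that concrete, here is a small sketch in Python of such a per-player state machine, using the chess example above. The event and class names (GameWon, PlayerAwards) are made up purely for illustration.

```python
# Sketch of the per-player "awards" aggregate described above (names are invented).
# It consumes win events and decides when to announce an award.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class GameWon:            # domain event announced by a finished game
    player_id: str
    color: str            # "white" or "black"

@dataclass
class PlayerAwards:
    """One instance per player; tracks just enough state to award 'won with both colors'."""
    player_id: str
    colors_won: set = field(default_factory=set)
    awards: list = field(default_factory=list)

    def handle(self, event: GameWon):
        self.colors_won.add(event.color)
        if {"white", "black"} <= self.colors_won and "both-colors" not in self.awards:
            self.awards.append("both-colors")   # announce the award here
```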
The report - which combines this award with others for the same player... well, aside from some plumbing details that seems pretty anemic, unless you need to do something clever to show only "recent" awards, or something similar.
A good talk that explores these ideas: All Our Aggregates Are Wrong, by Mauro Servienti.
If you were to come at this initially, this "player awards" concept could either be its own aggregate or a list of awards under the player.
It certainly could be a list of awards within the player aggregate. But - assuming the analysis above is correct - the list of awards doesn't influence, and isn't influenced by, any other changes to player.
There's one list (which might be empty) for every player, and you'll want to be able to retrieve a specific player's list using the player id, but these conditions alone do not constrain us to consider the list a part of the player aggregate.


State definition in Reinforcement learning

When defining the state for a specific problem in reinforcement learning, how do you decide what to include and what to leave out, and how do you draw the distinction between an observation and a state?
For example, assume the agent operates in a human-resources planning context where it needs to hire some workers based on the demand for jobs, taking the cost of hiring them into account (assuming the budget is limited). Is a state of the form (# workers, cost) a good definition of the state?
Overall, I don't know what information needs to be in the state and what should be left out because it is really an observation.
Thank you
I am assuming you are formulating this as an RL problem because the demand is an unknown quantity, and maybe [this is an optional criterion] because the cost of hiring a worker may take into account that worker's contribution towards the job, which is unknown initially. If, however, both these quantities are known or can be approximated beforehand, then you can just run a planning algorithm to solve the problem [or just some sort of optimization].
Having said this, the state in this problem could be something as simple as (# workers). Note that I'm not including the cost, because the cost must be experienced by the agent and is therefore unknown to the agent until it reaches a specific state. Depending on the problem, you might need to add another factor such as "time" or "jobs remaining".
Most of the theoretical results on RL hinge on a key assumption made in several setups: that the environment is Markovian. There are several works where you can get by without this assumption, but if you can formulate your environment in a way that exhibits this property, then you have many more tools to work with. The key idea is that the agent can decide which action to take (in your case an action could be: hire one more person; another action could be: fire a person) based on the current state, say (# workers = 5, time = 6). Note that we are not distinguishing between workers yet, so we fire "a" person rather than a specific person x. If the workers have differing capabilities, you may need to add several other factors, each representing whether a particular worker is currently hired or still in the pool waiting to be hired, so something like a boolean array of fixed length. (I hope you get the idea of how to form a state representation; this can vary based on the specifics of the problem, which are missing in your question.)
Now, once we have the state definition S and the action definition A (hire/fire), we have the "known" quantities of an MDP setup in an RL framework. We also need an environment that can supply us with the cost when we query it (the reward/cost function) and tell us the outcome of taking a certain action in a certain state (the transition function). Note that we don't necessarily need to know these reward/transition functions beforehand, but we should have a means of getting their values when we query for a specific (state, action) pair.
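For illustration, here is a toy sketch in Python of such an environment under the assumptions above (state = (# workers, time), actions = do nothing / hire / fire). All the numbers and the demand model are invented for the example.

```python
# A toy environment for the hiring problem sketched above (all numbers are made up).
# State: (num_workers, time). Actions: 0 = do nothing, 1 = hire one, 2 = fire one.
import random

class HiringEnv:
    def __init__(self, horizon=12, hire_cost=3.0, wage=1.0, revenue_per_job=2.0):
        self.horizon = horizon
        self.hire_cost = hire_cost
        self.wage = wage
        self.revenue_per_job = revenue_per_job

    def reset(self):
        self.t = 0
        self.workers = 0
        return (self.workers, self.t)

    def step(self, action):
        cost = 0.0
        if action == 1:
            self.workers += 1
            cost += self.hire_cost
        elif action == 2 and self.workers > 0:
            self.workers -= 1
        demand = random.randint(0, 10)            # unknown to the agent beforehand
        jobs_done = min(self.workers, demand)
        reward = jobs_done * self.revenue_per_job - self.workers * self.wage - cost
        self.t += 1
        done = self.t >= self.horizon
        return (self.workers, self.t), reward, done

# The agent only ever sees the (workers, time) tuple plus the sampled reward,
# which is exactly the "cost must be experienced" point made above.
env = HiringEnv()
state = env.reset()
state, reward, done = env.step(1)   # hire one worker
```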
Coming to your final part, the difference between an observation and a state: there are much better resources to dig into this, but in a crude sense an observation is an agent's (any agent: AI, human, etc.) sensory data. For example, in your case the agent has the ability to count the number of workers currently employed (but it does not have the ability to distinguish between workers).
A state, or more formally a true MDP state, must be something that is Markovian and captures the environment at its fundamental level. So, in order to determine the true cost to the company, the agent might need to be able to differentiate between workers, their working hours, the jobs they are working on, the interactions between workers, and so on. Note that many of these factors may not be relevant to your task, for example a worker's gender. Typically one would like to form a good hypothesis beforehand about which factors are relevant.
Now, even though we can agree that a worker's assignment (to a specific job) may be a relevant feature when deciding whether to hire or fire them, your observation does not contain this information. So you have two options: either you ignore the fact that this information is important and work with what you have available, or you try to infer these features. If your observation is incomplete for the decision making in your formulation, we typically classify the environment as partially observable (and use POMDP frameworks for it).
I hope I clarified a few points; however, there is a huge amount of theory behind all of this, and the question you asked about coming up with a state definition is a matter of research (much like feature engineering and feature selection in machine learning).

How to create a people tracking with reidentification model?

I am currently working on a project where I want to build a model that can detect and track people with a unique ID. The main issue is when a person leaves the frame and comes back after some time. Currently I am working with YOLOv4 and DeepSORT to detect and track, but they fail in this situation.
Please suggest some approach for doing detection, re-identification and tracking of people, cars or any other objects.
Thank you :)
Although YOLOv4 can detect people in an image/video stream, I think it might be too general for your case. When a person leaves the frame and comes back, ideally the model should remember having seen that person before.
One way to tackle this is to train on images of the people you want to detect.
E.g. in a system like yours, you could take multiple images of the people you want to track from different angles and label them with their unique identifiers. Afterwards you could train the model on this data (for your downstream task). This will ideally give more specific results for detecting and tracking people with their unique identifiers, as opposed to the general people detection you get when using YOLOv4 as is.
That said, I understand that taking lots of images of people may not be practical in certain scenarios. In that case you may want to look at techniques that produce accurate results with minimal data, such as domain adaptation (https://arxiv.org/abs/1812.11806). However, in an application for tracking and detecting people, I'm assuming you want minimal misclassifications, so you could say it's always a tradeoff.
You can find out more about dealing with lack of data in this article: (https://www.kdnuggets.com/2019/06/5-ways-lack-data-machine-learning.html)
However, I think this is a better place to start for a re-identification model: (https://github.com/KaiyangZhou/deep-person-reid)
It has ample documentation to get you started.
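As a rough sketch of how that library could slot into your pipeline, here is a Python outline that extracts an embedding for each detection crop and matches it against a gallery of previously seen people by cosine similarity. The exact torchreid API may differ between versions, and the gallery handling and threshold are assumptions for illustration only.

```python
# Rough sketch of re-identification with the linked deep-person-reid (torchreid) library.
# Treat this as an outline: API details may differ between versions, and gallery
# management and the threshold are hypothetical choices you would tune yourself.
import torch
from torchreid.utils import FeatureExtractor

extractor = FeatureExtractor(model_name="osnet_x1_0", device="cuda")

gallery = {}            # person_id -> embedding of a previously seen person
next_id = 0
MATCH_THRESHOLD = 0.6   # cosine similarity; tune on your own data

def identify(crop):
    """Assign an ID to a person crop (e.g. a YOLOv4 detection), reusing old IDs when possible."""
    global next_id
    feat = extractor([crop])[0]                  # embedding for this detection
    feat = feat / feat.norm()
    best_id, best_sim = None, -1.0
    for pid, emb in gallery.items():
        sim = float(torch.dot(feat, emb))
        if sim > best_sim:
            best_id, best_sim = pid, sim
    if best_sim >= MATCH_THRESHOLD:
        return best_id                           # re-identified a known person
    gallery[next_id] = feat                      # otherwise register a new identity
    next_id += 1
    return next_id - 1
```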

Runtime game values class

I'm a newbie at libgdx and game development logic, so I need your advice.
Here is my problem. I'm trying to make a game and I need to know how I should collect game values like hp, coins, score, attack speed, etc. There are two sides in my game, the player and the enemy. I made two classes, LeftSide (this is the player) and RightSide (this is the enemy). Then I noticed I didn't store the user's name and I can't decide where it belongs. What should I do? Should everything be kept in one class, or should there be more classes, like the player and enemy classes I mentioned before, and maybe one for the user's name?
Edit:
I ask this question because I don't know which is best for performance.
Option 2 is cleaner, if only because it can be factored down to two instances of a player class. Option 1 will be harder to maintain and will likely have duplicated logic.
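As an illustration of that factoring, here is a minimal sketch (in Python for brevity; the same shape applies to the Java/libGDX classes in the question, and the class and field names are just examples).

```python
# One class holds the runtime values; the two "sides" are just two instances of it.
class Fighter:
    def __init__(self, name, hp, coins=0, score=0, attack_speed=1.0):
        self.name = name              # works for the player's user name or an enemy label
        self.hp = hp
        self.coins = coins
        self.score = score
        self.attack_speed = attack_speed

player = Fighter(name="user_name_here", hp=100)
enemy = Fighter(name="Goblin", hp=80, attack_speed=1.5)
```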

Chess Engine - Confusion in engine types - Flash as3

I am not sure whether this kind of question has been asked and answered before, but as far as my search goes, I haven't found an answer yet.
First let me tell you my scenario.
I want to develop a chess game in Flash AS3. I have developed the interface. I have coded the movement of the pieces and the movement rules of the pieces. (Please note: only movement rules so far, not capture rules.)
Now the problem is that I need to implement the AI in chess for a one-player game. I am feeling helpless because, although I know each and every rule of chess, applying AI is not simple at all.
And my biggest confusion is this: I have been searching, and all of my searches point me to chess engines, but I always get confused between two types of engine. One is the front end, and the other is the real engine, but none of the sources specify (or I might not get it) which one is which.
I need something like an API, where I can get the search for the right pieces and moves according to the difficulty. Is there anything like that?
Please note: I want an open source and something to be used in Flash.
Thanks.
First of all, http://nanochess.110mb.com/archive/toledo_javascript_chess_3.html is the original project, which implements a relatively simple AI (I think it only searches 2 moves deep) in JavaScript. Since that was a contest project for minimal code, it is "obfuscated" somewhat by hand-made reduction of the source code. Here someone has tried to restore the same code to a more or less readable form: https://github.com/bormand/nanochess .
I think that it might be a little too difficult to write, given you have no background in AI... I mean, a good engine needs to calculate more than two steps ahead, but just to give you some numbers: the number of possible moves per step, given all pieces are on the board, would be approximately 140 at most; the second step would then be all combinations of these moves with all possible moves of the opponent, and the third multiplies by roughly the same factor again, i.e. 140 * 140 * 140. This means you would need a very good technique to discriminate the bad moves and only try to predict good moves.
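The standard technique for discarding those bad lines is minimax search with alpha-beta pruning. Here is a minimal, self-contained sketch in Python: the game tree and scores are made up so the example runs on its own, whereas in a real engine the children would come from your move generator and the leaf values from your evaluation function.

```python
# Skeletal minimax with alpha-beta pruning. To keep the sketch self-contained, a
# "position" here is just a game tree: a leaf is its evaluation score, an inner
# node is a list of child positions reachable by one move.
def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, (int, float)):          # leaf: static evaluation of the position
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if beta <= alpha:                   # prune: opponent will never allow this line
                break
        return best
    else:
        best = float("inf")
        for child in node:
            best = min(best, alphabeta(child, alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

# Tiny example tree, two plies deep: prints 3, the best guaranteed score.
tree = [[3, 5], [2, 9], [0, 1]]
print(alphabeta(tree, float("-inf"), float("inf"), True))
```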
As of today, there is no known deterministic winning strategy for chess (in other words, it has not been solved by computers, unlike some other board games), which means it is a fairly complex game; but an AI that can play at a hobbyist level isn't all that difficult to come up with.
A recommended further reading: http://aima.cs.berkeley.edu/
A Chess Program these days comes in two parts:
The User Interface, which provides the chess board, moves view, clocks, etc.
The Chess Engine, which provides the ability to play the game of chess.
These two programs use a simple text protocol (UCI or XBoard) to communicate, with the UI program running the chess engine as a child process and communicating over pipes.
This has several significant advantages:
You only need one UI program which can use any compliant chess engine.
Time to develop the chess engine is reduced as only a simple interface need be provided.
It also means that the developers get to do the stuff they are good at and don't necessarily have to be part of a team in order to get the other bit finished. Note that there are many more chess engines than chess UIs available today.
You are coming to the problem with several disadvantages:
As you are using Flash, you cannot use this two-program approach (AFAIK Flash cannot use fork(), exec(), or posix_spawn()). You will therefore need to provide the entire solution yourself, and you should at least attempt to make it multi-threaded so the engine can work while the user is interacting with the UI.
You are using a language which is very slow compared to C++, which is what engines are generally developed in.
You have access to limited system resources, especially memory. You might be able to override this with some setting of the Flash runtime.
If you want your program to actually play chess then you need to solve the following problems:
Move Generator: Generates all legal moves in a position. Some engine implementations don't worry about the "legal" part and prune illegal moves some time later. However, you still need to detect check, mate and stalemate conditions at some point.
Position Evaluation: Provides a score for a given position. If you cannot determine whether one position is better for one side than another, then you have no way of finding winning moves.
Move Tree and Pruning: You need to store the move sequences you are evaluating and a way to prune (ignore) branches that don't interest you (normally because you have determined that they are weak). A chess move tree is vast, given every possible reply to every possible move, and pruning the tree is the way to manage this.
Transposition Table: There are many transpositions in chess (the same position reached by making the moves in a different order). One method of avoiding re-evaluating a position you have already seen is to store its score in a transposition table. In order to do that, you need a hash key for the position, which is normally implemented using Zobrist hashing, sketched below.
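Here is a minimal Zobrist hashing sketch in Python, just to show the mechanism: one random 64-bit key per (piece, square), XORed together, with an incremental update when a piece moves. A real implementation would also hash the side to move, castling rights and en passant squares.

```python
# Minimal Zobrist hashing sketch: one random 64-bit key per (piece, square),
# XORed together; moving a piece updates the hash incrementally.
import random

random.seed(0)
PIECES = ["P", "N", "B", "R", "Q", "K", "p", "n", "b", "r", "q", "k"]
ZOBRIST = {(piece, square): random.getrandbits(64)
           for piece in PIECES for square in range(64)}

def full_hash(board):
    """board: dict mapping square index (0-63) -> piece letter."""
    h = 0
    for square, piece in board.items():
        h ^= ZOBRIST[(piece, square)]
    return h

def update_hash(h, piece, from_sq, to_sq, captured=None):
    """Incremental update when `piece` moves; much cheaper than rehashing the board."""
    h ^= ZOBRIST[(piece, from_sq)]        # remove piece from its old square
    if captured is not None:
        h ^= ZOBRIST[(captured, to_sq)]   # remove a captured piece
    h ^= ZOBRIST[(piece, to_sq)]          # place piece on its new square
    return h
    # Note: side to move, castling rights and en passant are omitted in this sketch.
```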
The best sites to get more detailed information (I am not a chess engine author) would be:
TalkChess Forum
Chess Programming Wiki
Good luck and please keep us posted of your progress!

Coding Competition, language agnostic guidelines?

I might be organizing a coding competition soon. I was wondering if anyone here has run one, and what the guidelines/process were.
I'd like to make the competition appealing to all devs, and I'm trying to come up with ideas as to how.
The scenario is: there is an event running, and we (the coding competition organizers) will have a room that we can use (either to code in or for questions, etc.). Ideally, however, the competition task should be assigned up front so participants are able to go and do other things, if they are so inclined.
What I wonder is what kind of challenges to set and, most importantly, what the criteria to "win" should be. Teaching and learning good coding standards takes a long time, and I'd like to think that the longer you've been coding, the more you'll do things right and quickly... but in a competition, you would be cutting corners...
I would really appreciate your input on this
A competition that's appealing to all devs? That sounds... difficult. But if you want to make the competition about problem solving and algorithms, then I am a big fan of the Sphere Online Judge. Basically this is a repository of programming puzzles but you can also become a problem setter and create problems or contests on the site.
It supports a huge number of programming languages, from the "popular" ones to more obscure ones. Programs typically read from standard input and write to standard output. The standard judge simply diffs a program's output against the expected output, but more elaborate judges are possible. You also set a time limit for the execution of submissions, which usually requires programmers to be more clever than brute force.
The winner is whoever solves the most problems. Ties are broken by the time of correct submissions, with a time penalty for wrong submissions.
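For a feel of what that diff-style judging amounts to, here is a toy sketch in Python. The command, file names and verdict strings are invented for the example, and a real judge would also sandbox submissions and enforce memory limits.

```python
# Toy version of the "diff judge" described above: run a submission, feed it the
# test input, and compare its output with the expected output under a time limit.
import subprocess

def judge(command, input_path, expected_path, time_limit=2.0):
    with open(input_path) as f_in, open(expected_path) as f_exp:
        expected = f_exp.read()
        try:
            result = subprocess.run(command, stdin=f_in, capture_output=True,
                                    text=True, timeout=time_limit)
        except subprocess.TimeoutExpired:
            return "TIME LIMIT EXCEEDED"
    if result.returncode != 0:
        return "RUNTIME ERROR"
    if result.stdout.split() == expected.split():   # ignore whitespace differences
        return "ACCEPTED"
    return "WRONG ANSWER"

# Example: judge(["python3", "solution.py"], "case1.in", "case1.out")
```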
Guidelines
Limit the languages which can be submitted. If you don't, you may get proprietary languages which require a specially purchased compiler or some other inconvenience.
Correctness
This is easy. Provide an easy-to-read unit test in all languages you will accept. This will allow simple, automatic testing of submissions, and will guide the interface of the solution.
Challenges
Create a theme. Make it focused, but not too specific as to require certain paradigms or language features. Then develop challenges based around that theme.
Assign points to each challenge. Give more points to more difficult problems. Be sure to review each challenge carefully and have a team attempt them before giving points so you can make a more accurate decision.
As #miorel mentioned in his answer, time limits and memory limits are wonderful. Set a time limit per test per challenge, or at least monitor them and have these metrics contribute to the points given for solving the problem.
You should look at the ACM competitions. Each year they run collegiate programming contests, which are language agnostic. The archive is located here:
http://www.ntnu.edu.tw/acm/
To start improving yourself, you need a project you can work on, a problem you want to solve, something you want to achieve. Without any context and a destination you want to reach, you won't be able to learn all the necessary methods and all the corners of a language.
There is a competition that takes place twice a year called Ludum Dare.
It also doesn't matter which language you write in; you just have to create a game within 48 hours (the compo: just one person, with all assets created by yourself) or 72 hours (the jam: a team working together, purchased assets allowed). After the competition, when everyone has uploaded their game, the voting starts. This takes about 20 days, during which everybody can vote for your game and you can vote for other people's games. Approximately 3000 people take part.
Every time the competition starts, there is a vote over 5 consecutive days. Each day you vote on a set of themes, one of which may become the theme you will have to create a game for. My last competition had the theme "unconventional weapon". After the voting ended, the competition started and you had to think of a game involving (in my case) an unconventional weapon and start coding a game you like.
This is not about being the best; you should look at other people's projects after the competition has ended. You can learn a lot from other people and the ways they solve their problems, and I am sure you would improve yourself every time you take part in such a contest.
It's going to be hard to design a coding competition suitable for a wide array of languages, since languages typically serve different purposes. I'd suggest that what you're looking for doesn't exist.