I'm rather new to GANs and confused about which GAN model would suit this use case best:
I have a dataset that contains paired images of men with NO_BEARD and with BEARD.
I want to train a GAN on those paired images so that, in the end, I can feed the network an input image and get a generated output image.
I think an image-to-image translation GAN or a CycleGAN might be the right fit for that purpose.
CycleGAN would work, but it was created precisely because paired data is not always possible to collect. If you use it, you will unnecessarily train a model that has to learn both the A->B and the B->A translation, even though you don't need the B->A direction. Since you have paired data, I would suggest using a pix2pix GAN. You can check out this GitHub repository.
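To make the paired setup concrete, here is a minimal PyTorch-style sketch of how the A/B pairs could be loaded for pix2pix-style training; it only shows the data side, not the networks. The directory layout, file naming, and image size are assumptions for illustration, not taken from any particular repository:

```python
# Minimal sketch of a paired-image dataset for pix2pix-style training.
# Assumes a hypothetical layout: data/no_beard/0001.jpg paired with data/beard/0001.jpg.
import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class PairedBeardDataset(Dataset):
    def __init__(self, root):
        self.root = root
        # Pair files by identical names in the two folders.
        self.names = sorted(os.listdir(os.path.join(root, "no_beard")))
        self.to_tensor = transforms.Compose([
            transforms.Resize((256, 256)),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        a = Image.open(os.path.join(self.root, "no_beard", name)).convert("RGB")
        b = Image.open(os.path.join(self.root, "beard", name)).convert("RGB")
        # A is the input (no beard), B is the target (beard) for the A->B translation.
        return self.to_tensor(a), self.to_tensor(b)
```

A pix2pix-style generator and discriminator would then be trained on each (A, B) pair directly, which is exactly what the unpaired CycleGAN setup cannot exploit.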
Here is a link to state-of-the-art models for image-to-image translation. CycleGAN is perhaps the most famous and the easiest to use.
I want to train my own custom GloVe representations from many PDF files. How can I do that? Also, is there any way to use POS tagging, dependency parsing, etc.? Can you suggest any links for implementing that?
Your question is too broad for a tight answer, but of course you can do what you describe.
You'd first look into libraries for extracting plain text from PDFs.
Some word2vec projects have trained word vectors on word tokens that have been extended with POS labels, or on dependency-defined contexts, with potential benefits depending on your goals. See, for example, Levy & Goldberg's paper on dependency-based embeddings:
https://levyomer.wordpress.com/2014/04/25/dependency-based-word-embeddings/
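As a rough, hedged illustration of that pipeline, here is a sketch that extracts plain text with pdfminer.six, POS-tags tokens with spaCy, and trains vectors on POS-extended tokens. It uses gensim's word2vec as a stand-in for GloVe, and the paths and parameters are placeholders:

```python
# Rough sketch: PDFs -> plain text -> POS-tagged tokens -> trained embeddings.
# word2vec (gensim) stands in for GloVe; paths and parameters are placeholders.
import glob
from pdfminer.high_level import extract_text
import spacy
from gensim.models import Word2Vec

nlp = spacy.load("en_core_web_sm")

sentences = []
for path in glob.glob("pdfs/*.pdf"):      # hypothetical input directory
    text = extract_text(path)             # plain text out of the PDF
    for sent in nlp(text).sents:
        # Extend each token with its POS label, e.g. "bank|NOUN" vs "bank|VERB".
        sentences.append([f"{tok.text.lower()}|{tok.pos_}"
                          for tok in sent if not tok.is_space])

# Train embeddings on the POS-extended tokens.
model = Word2Vec(sentences, vector_size=100, window=5, min_count=2, workers=4)
model.wv.save("pos_tagged_vectors.kv")
```

For dependency-defined contexts, you would instead pair each token with its syntactic head or dependents, along the lines of the Levy & Goldberg paper linked above.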
I am trying to extend the LDA model by adding another layer for locations.
Is it possible to add another layer in MALLET? If so, which classes should I extend?
The process I'm trying to model:
1. Choose a region
2. Choose a topic
3. Choose a word
The cc.mallet.topics.SimpleLDA class is intended for use as a base for development of new models: https://github.com/mimno/Mallet/blob/master/src/cc/mallet/topics/SimpleLDA.java
There may be alternatives to designing a new model from scratch. If region completely determines the distribution over topics and each document comes from one region, you could simply merge all documents from a region. If each document has one or more regions, you could consider regions as "authors" and implement the Author-Topic model. If you want to measure a more indirect relationship between regions and topics you might try a Dirichlet-Multinomial Regression (DMR) model.
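To make the target model concrete, here is a purely illustrative numpy sketch of the region -> topic -> word generative process described in the question. The distributions and sizes are made up, and this is not MALLET code (extending SimpleLDA would be done in Java):

```python
# Illustrative sketch of the generative process: region -> topic -> word.
# All distributions are made-up placeholders, not learned parameters.
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_topics, vocab_size = 3, 5, 1000

region_probs = rng.dirichlet(np.ones(n_regions))                   # p(region)
topic_given_region = rng.dirichlet(np.ones(n_topics), n_regions)   # p(topic | region)
word_given_topic = rng.dirichlet(np.ones(vocab_size), n_topics)    # p(word | topic)

def generate_word():
    r = rng.choice(n_regions, p=region_probs)            # 1. choose a region
    z = rng.choice(n_topics, p=topic_given_region[r])    # 2. choose a topic
    w = rng.choice(vocab_size, p=word_given_topic[z])    # 3. choose a word
    return r, z, w

print(generate_word())
```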
I started to study ER diagrams. When I browsed through ER diagram tutorials, I found something like Figure 1 below and learned from it.
Figure 1
Then I tried to create a sample ER diagram in MySQL Workbench and got components like those in the diagram below.
Figure 2
Then I searched Google Images for "ER Diagram" and got both types of images. I don't know the similarities and differences between the two diagrams.
Can you please help me understand this in detail so I can move further?
Thanks in advance.
When developing databases, a DBA (or someone else) can use a data modeling technique known as the Entity-Relationship Diagram.
This technique (as mentioned in other answers) was developed by Peter Chen, and it is widely used today for developing database structures such as tables and the relationships between them.
The first image represents a Conceptual Model of a given problem/situation. The second image is a Physical Model of a problem/situation. Both models are part of Peter Chen's overall approach to data modeling, the Entity-Relationship Diagram.
The models represent stages in working through a problem/situation. As you get a description of the problem/situation, you begin to develop its Conceptual Model. Once that is ready, the model is decomposed into a new model called the Logical Model.
The Logical Model is subsequently decomposed as well, resulting in the Physical Model: a final representation of the database table structures, containing field names, data types, relationships between tables, primary keys, foreign keys, and so on.
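As a hedged illustration of what a Physical Model ends up specifying, here is a tiny SQLAlchemy sketch with made-up Customer/Order tables showing field names, data types, a primary key, and a foreign key; the tables themselves are hypothetical and only stand in for whatever your problem domain requires:

```python
# Hypothetical physical model: field names, data types, primary and foreign keys.
from sqlalchemy import Column, Integer, String, Numeric, ForeignKey
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customer"
    id = Column(Integer, primary_key=True)        # primary key
    name = Column(String(100), nullable=False)    # field name + data type

class Order(Base):
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    total = Column(Numeric(10, 2))
    customer_id = Column(Integer, ForeignKey("customer.id"))  # foreign key
    customer = relationship("Customer")  # each order references one customer
```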
The decomposition process follows strict rules proposed by Peter Chen; you do not just draw whatever you like. You make a model and follow those rules to break it down so that you can pass to the next stage.
You can see the Entity-Relationship Diagram as a tool or technique that helps you develop a strong and concise database structure. With this technique, you create a model (three, actually) expressing the business rules needed in a system/web application. However, remember the following things:
1. Even before the Conceptual Model exists, we have to describe the problem (the business-rule requirements) on paper (or in a document). This is where you start to generate the Conceptual Model. This document may even be treated as a separate phase, prior to the Conceptual Model, called a Descriptive Model (this is not an official part of Peter Chen's work). That is where you establish the context of everything.
2. The context should cover only what needs to be persisted (in a database). There is no need to describe things that are not persisted. Your Descriptive Model should not contain unnecessary things.
3. During the development of the Conceptual Model, it is crucial that you completely forget what tables and foreign keys are. These things will only slow you down during this phase of development. They should be considered later, during the next stages.
I advise you to find out more about the Entity-Relationship Model (a.k.a. the Entity-Relationship Diagram) and study it. There are good books on the subject and a lot of material on the Internet. Once you have grasped it, believe me, database development will become much easier and more enjoyable.
If you have major questions, please make a comment and I will answer. Come join the community. Follow the entity-relationship tag. There are many interesting questions that can help you in your studies. Also, keep asking, keep participating. We are here to exchange knowledge!
Oh, one more thing: different professionals use somewhat different notations. For example, some people represent cardinalities as N...1, others as N-1, and others as (N,1). These differences do not change the end result.
EDIT
Thanks to whoever showed me this.
Your first diagram is a proper ER diagram, using the concepts and notation developed by Peter Chen in his paper The Entity-Relationship Model - Toward a Unified View of Data. This notation depicts both entities (rectangles) and relationships (diamonds). Ternary and higher relationships are easily represented and visible in this notation.
Your second diagram is commonly called an ER diagram. It doesn't distinguish entities from relationships; rather, the applications that produce these diagrams tend to confuse tables with entities and relationships with foreign key constraints. These diagrams have more in common with the network data model than with the entity-relationship model, since they depict only binary relationships between tables rather than n-ary relationships between entities.
Figure 1 is an Entity-Relationship Diagram; it shows abstract relationships and attributes between entities.
Figure 2 is a Relational Schema Diagram, which goes a step further and specifies foreign keys, attribute data types, and one-to-many/many-to-many relationships.
Both are database designs, and honestly there are some aspects added and removed in each one.
The first one is an Entity-Relationship diagram, although it's awfully specific and a lot of that cruft can be omitted. There are simple conventions you can use to declare relationships between tables, like arrows that mean "one to one", "one to many", or "many to many", but I've found that most of the time simply knowing the relationship exists is good enough.
Here's an example of a very high-level ERD that simply establishes the connection between different parts of your system:
There's usually no need to get into specifics at this point. Anyone not familiar with the project will immediately get a sense of how your data is structured and if they want to know more about implementation they can dig into the database level.
The second artifact there is a database diagram and is generally very specific in terms of details.
It's often easier to design your application starting with a very simple ERD, iterate on that until you're happy, then implement it in terms of database tables, fields and relationships later. This is when you'd use a database design tool to implement it if you prefer.
I was reading about DSLs (Martin Fowler's book), and in the first chapter he talks about Semantic and Adaptive Models. I don't really understand what these terms mean in the context of DSLs. I tried searching and reading more about them, but I still don't quite get it, since the explanations are also kind of complex. I would really appreciate it if someone could explain these to me in simple terms. Thanks.
These are both patterns explained in a bit more detail later in the same book and have links in Fowler's on-line DSL Pattern Catalog (though these provide little information beyond pointers to the locations in the book). Semantic Model is detailed in Chapter 11 and Adaptive Model is in Chapter 47.
Basically, a Semantic Model is a model tightly tied to the language, describing the same domain of knowledge that the language does, and it is populated pretty directly by the parser. Using one is generally recommended for separating the parsing logic from the semantic logic.
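For intuition, here is a tiny hedged Python sketch (not Fowler's code) using a made-up "task NAME after DEP" DSL: the parser does nothing but populate Task objects, and those objects are the Semantic Model, which holds all the domain behavior:

```python
# Toy Semantic Model: the parser only populates Task objects;
# the domain logic (computing an execution order) lives in the model.
class Task:
    def __init__(self, name):
        self.name = name
        self.dependencies = []

def parse(script):
    """Parse lines like 'task build after compile' into the Semantic Model."""
    tasks = {}
    for line in script.splitlines():
        words = line.split()
        if not words:
            continue
        name = words[1]                       # grammar: task NAME [after DEP]
        task = tasks.setdefault(name, Task(name))
        if "after" in words:
            dep = words[words.index("after") + 1]
            task.dependencies.append(tasks.setdefault(dep, Task(dep)))
    return tasks

def execution_order(task, seen=None):
    """Model behavior: walk dependencies before the task itself."""
    seen = [] if seen is None else seen
    for dep in task.dependencies:
        execution_order(dep, seen)
    if task not in seen:
        seen.append(task)
    return seen

tasks = parse("task compile\ntask build after compile")
print([t.name for t in execution_order(tasks["build"])])  # ['compile', 'build']
```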
An Adaptive Model is a technique for defining an alternative computational model (i.e., a style of computation not normally expressed directly in the host language), and it is sometimes actually a Semantic Model that models a computational DSL.
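And a correspondingly hedged sketch of an Adaptive Model idea, again purely illustrative: a state machine is an alternative computational model, so its transitions are held as plain data and a small generic engine interprets that data instead of hard-coded control flow:

```python
# Toy Adaptive Model: the state machine's transitions are plain data,
# interpreted by a generic engine rather than written as host-language control flow.
class StateMachine:
    def __init__(self, start, transitions):
        self.state = start
        self.transitions = transitions   # {(state, event): next_state}

    def handle(self, event):
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

# The "program" is this data structure; it could have been built by a DSL parser.
turnstile = StateMachine("locked", {
    ("locked", "coin"): "unlocked",
    ("unlocked", "push"): "locked",
})

for event in ["push", "coin", "push"]:
    print(event, "->", turnstile.handle(event))
```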