I am relatively new to SHACL and SPARQL and am stuck on a problem.
I have an ontology which contains information about a factory plant.
Part of this ontology is the description of rooms as individuals.
The individuals can be created by anyone, and I now need to check whether their creation is valid, e.g. the room name follows a given pattern, the floor number is an integer, and so on.
I'm writing this validation in SHACL but am stuck.
How can I check whether a room number already exists in the ontology?
The room number is defined as a data property, "hasRoomNr".
In short: I want to find duplicate values and raise an error.
I'm working with Protégé and have read a lot about SHACL, including that it is not directly possible to compare data values across individuals.
My plan is now to combine SHACL and SPARQL and write a query to search for duplicates, but I am stuck on comparing the values. I managed to write a SPARQL query which gives me all the room numbers.
Now I need to find a way to compare them and produce a validation report from the duplicates it finds.
Is this even possible?
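To illustrate what I am after, here is a rough sketch of the kind of duplicate check I would like the validation to perform, using Apache Jena only as an example runner outside Protégé; the file name and the property IRI are placeholders for my actual ontology:

import org.apache.jena.query.*;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public class DuplicateRoomNumbers {

    public static void main(String[] args) {
        // Load the ontology; "plant.ttl" and the IRI below are placeholders.
        Model model = ModelFactory.createDefaultModel();
        model.read("plant.ttl");

        // Group rooms by their hasRoomNr value and keep only values used more than once.
        String queryString =
                "PREFIX ex: <http://example.org/plant#> " +
                "SELECT ?roomNr (COUNT(?room) AS ?uses) " +
                "WHERE { ?room ex:hasRoomNr ?roomNr } " +
                "GROUP BY ?roomNr " +
                "HAVING (COUNT(?room) > 1)";

        try (QueryExecution qe = QueryExecutionFactory.create(queryString, model)) {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.next();
                System.out.println("Duplicate room number: " + row.getLiteral("roomNr")
                        + " (used " + row.getLiteral("uses").getInt() + " times)");
            }
        }
    }
}

My understanding is that a similar query, selecting $this for each offending room, could be embedded in a SHACL-SPARQL constraint (sh:sparql) so that the duplicates show up in the validation report, which is what I am ultimately after.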
I am in the process of writing an advanced search function using Spring Boot and MySQL for a book management system.
My Book object contains various fields such as material id, book name, author, publisher, description, and product type (as in a story book, a reference material, etc.).
I managed to write an ExampleMatcher as follows:
ExampleMatcher exampleMatcher = ExampleMatcher.matchingAny()
        .withIgnoreCase()
        .withIgnorePaths("material_id")
        .withStringMatcher(ExampleMatcher.StringMatcher.CONTAINING)
        .withStringMatcher(ExampleMatcher.StringMatcher.STARTING)
        .withIgnoreNullValues();

Example<Book> example = Example.of(book, exampleMatcher);
List<Book> all = bookRepository.findAll(example);
But when I get the result set, the results are sorted according to the material id. Records that match almost all the fields are there, but they are still sorted according to the id.
Is there a way for me to sort the results so that the most closely matching records come first in the list, followed by the rest? That is, to sort from most matching to least matching?
As far as I understand, JpaSort allows ascending and descending sorting, and we can also specify sorting for specific attributes.
But in the advanced search, the searching is done dynamically according to the attributes that the user fills in. Therefore, I cannot hard-code which fields of the table to sort on, right? For example, if I program the book name field to be sorted in ascending order and the user does not specify any value for that particular field, then sorting on that field is useless, right?
That is why I want to know if there is any way to dynamically sort the results from most matching to least matching. Any way of achieving this task is much appreciated. Thank you.
After two whole days of reading more than 50-70 articles and posts on the Internet, I was able to implement the advanced search in a more optimized manner.
I was not able to find out how to sort the results from most-matching to least-matching as I originally asked in the question. So if someone can still answer my original question, I am happy to accept it.
The workaround I used is as follows.
From an idea I got about dynamically generating the SQL query, I was able to find a lead and referred to articles on that.
In Dynamic Query in Spring Boot, the author has used the Java Reflection API to manually go through the non-null fields of the entity class and generate the SQL query. But when you are using Spring Boot and all the configuration is done for you, I don't think it is really effective to have the Hibernate dependency explicitly, manage sessions yourself, and run your own SQL query. The HibernateJpaSessionFactoryBean used in that article is now deprecated. I referred to various articles and the Spring Data JPA documentation but could not resolve the error saying that Spring Boot cannot find the entityManagerFactory bean.
Therefore, I searched for ways to dynamically generate queries using Spring Data JPA itself, without using Hibernate directly and dealing with the hassle of session management. Dynamic Queries with Spring Data JPA Specifications and Using Spring Data JPA Specification have enough information on how to implement JpaSpecification in order to generate queries dynamically in Spring Boot.
So in the end, I used information from all three articles cited here to come up with my implementation. I used Java reflection to create a Specification according to the class type of each non-null field in my entity object.
The new part I added myself was grouping all the separate Specifications into a List and writing a loop to dynamically generate the final Specification used to retrieve the data. It is as follows.
List<BookSpecification> bookSpecifications = createDynamicQuery(book);

if (bookSpecifications.size() != 0) {
    // Start from the first Specification and OR the remaining ones onto it
    Specification<Book> dynamicQuery = Specification.where(bookSpecifications.get(0));
    for (int i = 1; i < bookSpecifications.size(); i++) {
        dynamicQuery = dynamicQuery.or(bookSpecifications.get(i));
    }
    List<Book> all = bookRepository.findAll(dynamicQuery);
    all.forEach(System.out::println);
    return all;
}
The createDynamicQuery() method above, which I implemented in my own way, is inspired by the information in the cited articles.
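For anyone curious, the general shape of that method is roughly the following (simplified and with illustrative names, not my exact code); it wraps each non-null field of the search object in a small Specification:

import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Predicate;
import javax.persistence.criteria.Root;
import org.springframework.data.jpa.domain.Specification;

// In the service class that performs the search:
private List<BookSpecification> createDynamicQuery(Book book) throws IllegalAccessException {
    List<BookSpecification> specifications = new ArrayList<>();
    for (Field field : Book.class.getDeclaredFields()) {
        field.setAccessible(true);
        Object value = field.get(book);
        if (value != null) {
            // One Specification per non-null field of the search object
            specifications.add(new BookSpecification(field.getName(), value));
        }
    }
    return specifications;
}

// A minimal Specification that matches a single field against a value.
public class BookSpecification implements Specification<Book> {

    private final String fieldName;
    private final Object value;

    public BookSpecification(String fieldName, Object value) {
        this.fieldName = fieldName;
        this.value = value;
    }

    @Override
    public Predicate toPredicate(Root<Book> root, CriteriaQuery<?> query, CriteriaBuilder cb) {
        if (value instanceof String) {
            // Partial, case-insensitive match for text fields
            return cb.like(cb.lower(root.get(fieldName).as(String.class)),
                    "%" + ((String) value).toLowerCase() + "%");
        }
        return cb.equal(root.get(fieldName), value);
    }
}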
This way, I was able to obtain much more accurate advanced search results than with ExampleMatcher for the same search criteria. And since I am searching by specific field names, the search results were also sorted in a more accurate way.
How can I generate MySQL queries with LUIS and fetch data from a DB hosted in Azure?
I want to turn a natural-language query into a MySQL query.
e.g.
How much beer was drunk at Oktoberfest 2018?
--> GET amountOfBeer FROM Oktoberfest WHERE Year ==2018;
Does anyone have an idea how to get this to work?
I have already created some small intents in LUIS, e.g. GetAmountOfBeer.
I don't know how to generate the MySQL statements or how to get the data from the DB.
Thanks.
You should be able to achieve this, or something similar, using intents and entities. How successful this will be depends on how many queries you need and how diverse they are. First let's start with the phrase you mentioned: "How much beer was drunken on the oktoberfest 2018". You can easily (as you've done) add this as an utterance for an intent, GetAmountOfBeer. Though I'm a fan of intent names that you can read as "I want to GetAmountOfBeer", here you may want to name the intent amountOfBeer so you can use it in your query directly.
Next you need to set up your entities. For the year (or datetime, rather) that should be easy, as I believe there are predefined entities for this. I think you need to use a datetime recognizer to parse out the right attribute (like the year), but I haven't tried this before. Next, Oktoberfest seems to be a specific holiday or event in your DB, so you could create a list entity of all the events you have.
What you are left with is something like (pseudocode) GET topIntent FROM eventEntity WHERE Year ==datetime.Year, or something like that.
If your query set is more complex, you might have to have multiple GET statements, but you could put those in a switch statement by topIntent so that, no matter what the intent is, you can parse out the correct values. You also might want to build this into a dialog where you can check if the entities exist, and if not, you can prompt the user for the missing data.
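For illustration, here is a rough sketch of the switch-by-intent idea on the back-end side using plain JDBC; the table and column names and the second intent are made up, and in a real bot the intent and entity values would come from the LUIS result:

import java.sql.*;

public class LuisToSql {

    // Map the top-scoring LUIS intent plus extracted entities to a parameterized query.
    public static void runQuery(Connection conn, String topIntent, String event, int year)
            throws SQLException {
        String sql;
        switch (topIntent) {
            case "GetAmountOfBeer":
                // Hypothetical table/column names
                sql = "SELECT amountOfBeer FROM consumption WHERE event = ? AND year = ?";
                break;
            case "GetVisitorCount":
                sql = "SELECT visitors FROM consumption WHERE event = ? AND year = ?";
                break;
            default:
                System.out.println("No query mapped for intent: " + topIntent);
                return;
        }

        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, event);   // e.g. "Oktoberfest", from the list entity
            ps.setInt(2, year);       // e.g. 2018, from the datetime entity
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getObject(1));
                }
            }
        }
    }
}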
I'm an intern and I've been tasked with something I'm pretty unfamiliar with. My manager has requested that I create a simple MySQL database using data from an Excel file (or files), and I have no idea where to start. I would normally ask someone here for help, but everyone seems to be really busy. Basically, the purpose of the database is to see which different object groups relate to one another, so as to keep things standardized. I'm trying not to go into detail about things that aren't really relevant.
I was asked to first design a schema for the database, and then I would get an update on how to implement it. Would I just start by writing queries to create tables? I'm assuming I would need to convert the Excel files to .csv; how do I read this data and send it to the correct table based on Object Type (an attribute of each object, represented in a column)?
I don't want to ask too much right now, but if someone could help me understand what I need to do to get started I would really appreciate it.
Look at the column headers in your spreadsheet.
Decide which columns relate to Objects and which columns relate to Groups.
The columns that relate to just Objects will become your field names for the Object table. Give this table an ID field so you can uniquely identify each Object.
The columns that relate to the Groups will become field names for a Group table. Give this table an ID field so you can uniquely identify each Group.
Think about if an Object can be in more than one Group - if so you will probably need an Object-Group table. This table would most likely contain an ObjectID and a GroupID.
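To make the steps above concrete, here is a rough sketch of what the table creation and a simple CSV load could look like in Java with JDBC; all table, column, and file names are just examples and should be taken from your actual spreadsheet headers:

import java.io.BufferedReader;
import java.io.FileReader;
import java.sql.*;

public class LoadObjectsAndGroups {

    public static void main(String[] args) throws Exception {
        // Connection details are placeholders
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/inventory", "user", "password")) {

            try (Statement st = conn.createStatement()) {
                // Example tables following the Object / Group / Object-Group pattern above
                st.executeUpdate("CREATE TABLE IF NOT EXISTS object_table ("
                        + "object_id INT AUTO_INCREMENT PRIMARY KEY, "
                        + "object_name VARCHAR(255), object_type VARCHAR(100))");
                st.executeUpdate("CREATE TABLE IF NOT EXISTS group_table ("
                        + "group_id INT AUTO_INCREMENT PRIMARY KEY, group_name VARCHAR(255))");
                st.executeUpdate("CREATE TABLE IF NOT EXISTS object_group ("
                        + "object_id INT, group_id INT, "
                        + "PRIMARY KEY (object_id, group_id))");
            }

            // Load a CSV exported from Excel; assumes columns: object_name,object_type
            // (a naive split on "," is fine only if the cells contain no commas themselves)
            String insert = "INSERT INTO object_table (object_name, object_type) VALUES (?, ?)";
            try (BufferedReader reader = new BufferedReader(new FileReader("objects.csv"));
                 PreparedStatement ps = conn.prepareStatement(insert)) {
                String line = reader.readLine(); // skip the header row
                while ((line = reader.readLine()) != null) {
                    String[] cols = line.split(",");
                    ps.setString(1, cols[0]);
                    ps.setString(2, cols[1]);
                    ps.executeUpdate();
                }
            }
        }
    }
}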
I'm fairly new to Tridion and I have to implement functionality that will allow a content editor to create a component and assign multiple date ranges (available dates) to it. These will need to be queried from the broker to provide a search functionality.
Originally, this only required a single start and end date, so these were implemented as individual metadata fields.
I am proposing to use an embedded schema within the schema's 'available dates' metadata field to allow multiple start and end dates to be assigned.
However, as the field now allows multiple values, the data is stored in the broker as comma-separated values in the 'KEY_STRING_VALUE' column rather than as a date value in the 'KEY_DATE_VALUE' column, as it was when only a single start and end value was allowed.
eg.
KEY_NAME | KEY_STRING_VALUE
end_date | 2012-04-30T13:41:00, 2012-06-30T13:41:00
start_date | 2012-04-21T13:41:00, 2012-06-01T13:41:00
This is now causing issues with my broker querying as I can no longer use simple query logic to retrieve the items I require for the search based on the dates.
Before I start to write C# logic to parse these comma separated dates and search based on those, I was wondering if anyone had had similar requirements/experiences in the past and had implemented this in a different way to reduce the amount of code parsing required and to use the broker querying to complete the search.
I'm developing this on Tridion 2009 but using the 5.3 Broker (for legacy reasons) so the query currently looks like this (for the single start/end dates):
query.SetCustomMetaQuery("(KEY_NAME='end_date' AND KEY_DATE_VALUE>'" + startDateStr + "') AND (ITEM_ID IN (SELECT ITEM_ID FROM CUSTOM_META WHERE KEY_NAME='start_date' AND KEY_DATE_VALUE<'" + endDateStr + "'))");
Any help is greatly appreciated.
Just wanted to come back and give some details on how I finally approached this should anyone else face the same scenario.
I proposed a set number of fields to the client (as suggested by Miguel), but the client wasn't happy with that level of restriction.
Therefore, I ended up implementing the embeddable schema containing the start and end dates which gave most flexibility. However, limitations in the Broker API meant that I had to access the Broker DB directly - not ideal, but the client has agreed to the approach to get the functionality required. Obviously this would need to be revisited should any upgrades be made in the future.
All the processing of dates and the available periods were done in C# which means the performance of the solution is actually pretty good.
One thing that I did discover, and that caused some issues, is that if you have multiple values for the field using the embedded schema (i.e. in this case, multiple start and end dates), then the metadata is stored in the KEY_STRING_VALUE column of the CUSTOM_META table. However, if you only have a single value in the field (i.e. one start and end date), then it is stored as a date in the KEY_DATE_VALUE column, just as if you'd used single fields rather than an embeddable schema. It seems a sensible approach for Tridion to take, but it makes writing the queries and the parsing code slightly more complicated!
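For illustration (my actual implementation is in C#, but the idea is the same), the date processing boils down to splitting the KEY_STRING_VALUE on commas, pairing up the start and end values, and testing each range against the requested search window, roughly like this:

import java.time.LocalDateTime;

public class AvailableDates {

    // Pair up the comma-separated start/end values and test each range for overlap
    // with the requested search window.
    public static boolean isAvailable(String startValues, String endValues,
                                      LocalDateTime searchStart, LocalDateTime searchEnd) {
        String[] starts = startValues.split("\\s*,\\s*");
        String[] ends = endValues.split("\\s*,\\s*");

        for (int i = 0; i < starts.length && i < ends.length; i++) {
            LocalDateTime rangeStart = LocalDateTime.parse(starts[i]); // e.g. 2012-04-21T13:41:00
            LocalDateTime rangeEnd = LocalDateTime.parse(ends[i]);
            // Overlap test: the range starts before the window ends and ends after it starts
            if (rangeStart.isBefore(searchEnd) && rangeEnd.isAfter(searchStart)) {
                return true;
            }
        }
        return false;
    }
}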
This is a complex scenario, as you would have to go through all the DCPs and parse those strings to determine whether they match the search criteria.
There is a way you could convert that comma-separated metadata into single values in the broker, but the field names would need to be different: Range1, Range2, ..., RangeN.
You can do that with a Deployer extension, where you change the XML structure of the package and convert each of those strings into different values (1, 2, ..., n).
This extension can take some time if you are not familiar with Deployer extensions, and it doesn't solve your scenario 100%.
The problem with this is that you still have to apply several conditions to retrieve those values, and there is always a limit you have to set (versus the user, who can add as many values as they want).
Sample:
query.SetCustomMetaQuery((KEY_NAME='end_date1'
query.SetCustomMetaQuery((KEY_NAME='end_date2'
query.SetCustomMetaQuery((KEY_NAME='end_date3'
query.SetCustomMetaQuery((KEY_NAME='end_date4'
Probably the fastest and easiest way to achieve this is, instead of using a multi-value field, to use different fields. I understand that this is not the most generic scenario and there are business-requirement implications, but it can simplify the development.
My previous comments are in the context of using only the Broker API, but you can take advantage of a search engine if one is part of your architecture.
You can index the Broker database and massage the data.
Using the search engine API you can extract the IDs of the Components/Component Templates and then use the Broker API to retrieve the proper information.
This question of mine is subjective.
I am getting a list of objects from a third-party site.
Now I want to save that data in a database.
Suppose the data is a List of these objects. This is the response to a query that I fired at that site.
Now I want to save two things:
1) the query name
2) the response (the List of answers)
A query can have a lot of answers, and I want to save all of these answers separately so that each answer can be fetched independently.
I currently have this DB approach:
one table for the query name and query id
a second table consisting of the query id and the query answer (the query id being a foreign key referencing the first table)
My question is: am I following the right approach?
Initially I thought of saving the whole list in the database, but as far as I know we cannot save a list in a database directly, although with a JPA 2.0 implementation we can (correct me if I am wrong).
Please guide me on my current approach, or let me know if there is a better approach.
I am using JPA 2.0 with EclipseLink.
Regards
Anil Sharma
What is your object model?
You can use a OneToMany or ManyToMany to store a collection of Entity objects.
If you have a List of basic values or embeddables, you can store it using an ElementCollection.
But you may be better off creating an Answer or AnswerReference Entity.
See: http://en.wikibooks.org/wiki/Java_Persistence/ElementCollection
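For example, a minimal sketch of the second option with a OneToMany mapping (entity and field names are just examples):

import java.util.ArrayList;
import java.util.List;
import javax.persistence.*;

// e.g. SearchQuery.java - one row per query fired at the third-party site
@Entity
public class SearchQuery {

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    // Saving the query cascades to its answers
    @OneToMany(mappedBy = "query", cascade = CascadeType.ALL, orphanRemoval = true)
    private List<Answer> answers = new ArrayList<>();

    public void addAnswer(Answer answer) {
        answers.add(answer);
        answer.setQuery(this);
    }
}

// e.g. Answer.java - one row per answer, fetchable independently
@Entity
public class Answer {

    @Id
    @GeneratedValue
    private Long id;

    private String text;

    // Maps to the foreign key column in the answer table, pointing back to the query table
    @ManyToOne
    @JoinColumn(name = "query_id")
    private SearchQuery query;

    public void setQuery(SearchQuery query) {
        this.query = query;
    }
}

This mirrors the two-table approach described in the question: one table for the query, one table for the answers with a foreign key back to the query.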