Person schema not being detected by the Rich Results Test? - json

I have used the Person schema on my site as per the schema.org standard, but the Rich Results Test tool does not detect it, whereas the older Structured Data Testing Tool does.
Is the Person schema supported by Google or not?
Person schema on schema.org: https://schema.org/Person
Testing screenshot: https://a.cl.ly/E0ur0e69

The Rich Results Test only reports top-level entities that generate rich results in Google, and Person is not one of them. Person is a supported entity, but it is only used inside other entities, for example as the author of a review.
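A minimal sketch of markup where Person would be picked up, nested as the author of a Review rather than standing alone at the top level (the product, rating, and name values are invented placeholders). The snippet builds the JSON-LD in Python and prints the script tag to embed in the page:

    import json

    # Hypothetical example: Person appears only inside a Review, as its author.
    # All concrete values are placeholders, not taken from the question above.
    review = {
        "@context": "https://schema.org",
        "@type": "Review",
        "itemReviewed": {"@type": "Product", "name": "Example Product"},
        "reviewRating": {"@type": "Rating", "ratingValue": "4"},
        "author": {
            "@type": "Person",   # nested Person entity
            "name": "Jane Doe"
        }
    }

    # Print the <script> block to paste into the page's HTML.
    print('<script type="application/ld+json">')
    print(json.dumps(review, indent=2))
    print('</script>')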

I faced the same issue with the "Person" schema type while trying to test my markup in Google's Rich Results Test tool. After spending a lot of time searching online, I came across a very good article that explains the issue. Apparently, not all schema types are supported by Google's Rich Results Test tool.
The following is the list of supported rich result types that Google's tool currently recognizes, as mentioned on their Search Console Help page:
AMP article
Article
Breadcrumb
Carousel
Course
Critic Review
Dataset
Employer rating
Estimated salary
Event
FAQ
Fact check
Guided recipe
How-to
Image License
Job posting
Job training
Local business
Logo
Movie
Product
Q&A page
Recipe
Review snippet
Sitelinks searchbox
Software app
Special Announcement
Video
As you can see, the schema type "Person" is not in the supported list.
Thanks to the article SEO for 2021: How to Use Google's New Testing Tool for Structured Data, written by Anne Fernandez, which has some additional tips.


How to train Document AI to get specific fields?

I have approximately 6,000 documents in PDF format. They have different structures, but they all contain the same date and code (by different structure I mean that the location of these values changes in each document). I am working with Document AI, which extracts all the information, but I would like to know whether it is possible to extract only the fields that I need. Would Document AI Workbench be the best option?
Did you mean creating a custom document extractor? You can do this in Document AI; visit this link for the feature.
TL;DR: you will have to do this in Document AI Workbench and train your own extractor (upload files and train the processor to extract the fields you specify). For the steps, I would suggest visiting this documentation for detailed instructions.
Also please note that this feature is in the Preview stage at the moment. Preview offerings are often publicly announced, but are not necessarily feature-complete, and no SLAs or technical support commitments are provided for them.
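Once a custom extractor has been trained in Workbench, calling it is the same as calling any other Document AI processor. A minimal sketch with the google-cloud-documentai Python client, assuming placeholder project, location, and processor IDs, and entity labels ("date", "code") that depend entirely on how the training data was annotated:

    from google.cloud import documentai

    # Placeholder identifiers - replace with your own values.
    PROJECT_ID = "my-project"
    LOCATION = "us"              # processor region, e.g. "us" or "eu"
    PROCESSOR_ID = "abcdef123456"

    client = documentai.DocumentProcessorServiceClient()
    name = client.processor_path(PROJECT_ID, LOCATION, PROCESSOR_ID)

    with open("document.pdf", "rb") as f:
        raw_document = documentai.RawDocument(content=f.read(), mime_type="application/pdf")

    result = client.process_document(
        request=documentai.ProcessRequest(name=name, raw_document=raw_document)
    )

    # Entities come back labelled with the field types defined during training,
    # e.g. the "date" and "code" fields in this scenario.
    for entity in result.document.entities:
        print(entity.type_, "->", entity.mention_text, f"({entity.confidence:.2f})")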

How to list Apple Pay as a payment option in Schema.org?

https://schema.org/PaymentMethod
A payment method is a standardized procedure for transferring the monetary amount for a purchase. Payment methods are characterized by the legal and technical structures used, and by the organization or group carrying out the transaction.
It's mentioned in Apple's documentation:
Inform search engines that Apple Pay is accepted on your website. If your website uses semantic markup to provide product details to search engines, list Apple Pay as a payment option.
I'm wondering how to list Apple Pay as a payment option for my application?
Unfortunately, there is currently no standard way to add Apple Pay markup, as per the schema.org documentation.
However, the values mentioned there are only recommendations, and you can add your own value (and replace it later when a standard value becomes available).
Just note that it should be a URL.
I've tested Google's Rich Results Test tool, and a custom ApplePay value passes the test successfully.
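A minimal sketch of what that could look like, built with Python for readability. The https://example.com/ApplePay identifier is an invented placeholder (schema.org defines no official Apple Pay value), while the VISA entry is one of the GoodRelations URLs the schema.org PaymentMethod page recommends:

    import json

    # Hypothetical store markup; all names and values are placeholders.
    store = {
        "@context": "https://schema.org",
        "@type": "Store",
        "name": "Example Store",
        "acceptedPaymentMethod": [
            "http://purl.org/goodrelations/v1#VISA",   # recommended standard value
            "https://example.com/ApplePay"             # custom, non-standard URL
        ]
    }

    print('<script type="application/ld+json">')
    print(json.dumps(store, indent=2))
    print('</script>')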

Visualizing chatbot structure

I have created a chatbot using SnatchBot for the purpose of a quiz. I have been asked to create a dynamic decision tree structure for the chatbot which must be displayed on the web page, i.e. every time the user answers a question, a branch on the tree must be created according to the user's response. Is there any way to do this? Is it possible to generate the JSON for the structure of the chatbot rather than the JSON for previous conversations? Would any other platform, such as Dialogflow, be more suitable?
I am also using SnatchBot. You will need to use the NLP section to create all your samples and train your data; then you can add global connections, giving you the possibility to direct the bot to the needed subject at any point of the conversation.
The value of this tool is that it allows the user to immediately (and at any point in the conversation) direct the bot to a particular subject.
From a technical perspective, I have some recommendations for you:
https://jorin.me/chatbots.pdf (Development and Applications)
https://www.researchgate.net/publication/325607065_Implementation_of_a_Chat_Bot_System_using_AI_and_NLP (Implementation Using AI And NLP)
From a strategy perspective, here are the crucial main criteria for enterprise chatbot implementation success:
Defining clear audience profiles for the project
Identifying a clear goal for the project
Defining clear dialog-flow related key intents
Platform's customer experience SWOT assessment
Forming coherent teams
Testing and involving the audience from early on in the validation of the project
Implementing feedback analytics to be used as a basis for continuous improvement
(Source: http://athenka.com)

API call - the SMT category

I have recently tried to review the Chinese -> English system. According to https://blogs.msdn.microsoft.com/translation/2017/11/15/microsoft-translator-accelerates-use-of-neural-networks-across-its-offerings/ , those systems have already been switched to NMT models. There is also a statement that users can still use the statistical system by setting the category to "SMT".
However, https://blogs.msdn.microsoft.com/translation/2016/01/27/new-microsoft-translator-customization-features-help-unleash-the-power-of-artificial-intelligence-for-everyone/ mentions that there were actually three standard categories available for SMT engines: General (default), TECH, and SPEECH.
Could you please explain which domain is offered by the SMT category now, and for how long it will be supported on your side?
Thanks
We are working on customization using a neural network decoder. Currently, the Microsoft Translator Hub has three category IDs for SMT: general, tech, and speech.
With content that is not narrowly confined to your domain, you may find it better to use category=generalnn than your current customization.
Chinese uses the NMT system, so using category=generalnn would result in the same translation when calling the service using the Microsoft Translator Text API.
The second article addresses customization, where you can create your own custom translation system or dictionary tuned to your domain, style, and terminology. If you're interested in customization (SMT at this time), there are categories associated with using the Translator Text API and the Microsoft Translator Hub. The category identifies the domain for the project you create using the Hub. Two of the categories are Tech and Speech.
See the Microsoft Translator Hub User Guide to learn more about the Hub.
The tech category will produce different results only when translating FROM English to other languages. In the case of English>Chinese, with my sample sentence "My computer doesn't boot up.", it does. For Chinese>English, specifying "tech" will fall back to the default, which is neural in the case of Chinese<>English. "speech" generates the same results as "generalnn" in all cases.
It is generally true, including for Hub categories, that a category that is valid in one language pair is valid in all language pairs. The API will fail with an "invalid category" error only if that category doesn't exist at all. The reason for this design is so that you can build your custom systems out language by language, over time, while still allowing the user to choose between all available languages, at the cost of occasionally suboptimal domain vocabulary in an as-yet-uncustomized language pair.
The API does not tell you whether a customized system was used or not. A trick to get that information anyway is to watermark your custom system with a dictionary entry. Make a dictionary entry "_mywatermark" that translates to, for instance, "CustomSystem180309_1700_en_ru", and then you can test at any time, in any application, whether you are getting your custom system or not.
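To illustrate how the category parameter and the watermark trick fit together, here is a minimal sketch against the current Translator Text API v3. The endpoint and parameters are the publicly documented ones, but the subscription key, the category ID, and the "_mywatermark" dictionary entry are placeholders you would replace with your own:

    import requests

    # Placeholders - substitute your own key and (custom) category ID.
    SUBSCRIPTION_KEY = "YOUR_TRANSLATOR_KEY"
    CATEGORY = "generalnn"   # or the category ID of your custom system

    response = requests.post(
        "https://api.cognitive.microsofttranslator.com/translate",
        params={"api-version": "3.0", "from": "en", "to": "ru", "category": CATEGORY},
        headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
                 "Content-Type": "application/json"},
        # Sending the watermark term checks whether the custom system
        # (with its "_mywatermark" dictionary entry) is actually answering.
        json=[{"Text": "_mywatermark"}],
    )
    response.raise_for_status()
    print(response.json()[0]["translations"][0]["text"])
    # e.g. "CustomSystem180309_1700_en_ru" if the custom system was used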

GEDCOM to HTML and RDF

I was wondering if anyone knew of an application that would take a GEDCOM genealogy file and convert it to HTML format for viewing and publishing on the web. I'd like to have separate HTML files for each individual, and perhaps additional files for other content as well. I know there are some tools out there, but I was wondering if anyone has used any of them and could advise. I'm not sure what form to expect such applications to take: they could be Python or PHP files that one can edit, or even JavaScript (maybe), or just executables.
The next issue might be appropriate for a topic in itself: export of GEDCOM to RDF. My interest here would be to align the information with specific vocabularies, such as BIO or REL, which are both extensions of FOAF.
Thanks,
Bruce
Like Rob Kam said, Ged2Html was the most popular such program for a long time.
GRAMPS can also create static HTML sites and has the advantage of being free software and having a native XML format which you could easily modify to fit your needs.
Several years ago, I created a simple Java program to turn GEDCOM into XML. I then used XSLT to generate HTML and RDF. The HTML I generate is pretty rudimentary, so it would probably be better to look elsewhere for that, but the RDF might be useful to you:
http://jay.askren.net/Projects/SemWeb/
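A minimal sketch of that GEDCOM-to-RDF step in Python, mapping individuals to the FOAF and BIO vocabularies mentioned in the question; it only reads INDI, NAME, and BIRT/DATE records, and the example.org URIs and file names are placeholders:

    from rdflib import Graph, Literal, Namespace, RDF, URIRef
    from rdflib.namespace import FOAF

    BIO = Namespace("http://purl.org/vocab/bio/0.1/")

    def gedcom_individuals(path):
        # Tiny GEDCOM reader: yields (id, name, birth_date) for each INDI record.
        ind_id, name, birth, in_birth = None, None, None, False
        for line in open(path, encoding="utf-8"):
            parts = line.strip().split(" ", 2)
            if len(parts) < 2:
                continue
            level, tag = parts[0], parts[1]
            if level == "0":
                if ind_id:
                    yield ind_id, name, birth
                ind_id, name, birth, in_birth = None, None, None, False
                if len(parts) > 2 and parts[2].startswith("INDI"):
                    ind_id = tag.strip("@")
            elif ind_id and tag == "NAME":
                name = parts[2].replace("/", "").strip()
            elif ind_id and tag == "BIRT":
                in_birth = True
            elif ind_id and in_birth and tag == "DATE":
                birth, in_birth = parts[2], False
        if ind_id:
            yield ind_id, name, birth

    g = Graph()
    g.bind("foaf", FOAF)
    g.bind("bio", BIO)

    for ind_id, name, birth in gedcom_individuals("family.ged"):
        person = URIRef(f"http://example.org/person/{ind_id}")
        g.add((person, RDF.type, FOAF.Person))
        if name:
            g.add((person, FOAF.name, Literal(name)))
        if birth:
            event = URIRef(f"http://example.org/event/{ind_id}-birth")
            g.add((event, RDF.type, BIO.Birth))
            g.add((event, BIO.principal, person))
            g.add((event, BIO.date, Literal(birth)))

    g.serialize("family.ttl", format="turtle")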
There are a number of these. All listed at http://www.cyndislist.com/gedcom/gedcom-to-web-page-conversion/
Ged2html used to be the most popular and most versatile, but is now no longer being developed. It's an executable, with output customisable through its own scripting syntax.
Family Historian http://www.family-historian.co.uk will create exactly what you are looking for, e.g. one file per person, using the built-in website creator, as will a couple of the other major genealogy packages. I have not seen anything for the RDF part of your question.
I have since tried to produce a genealogy application using Semantic MediaWiki: MediaWiki, the software behind Wikipedia, together with the Semantic MediaWiki extension and various related Semantic Web extensions. I thought it was very easy to use, with its forms and the ability to upload a GEDCOM, but feedback from people into genealogy was that it appeared too technical and didn't seem to offer anything new.
So now the issue is whether to stay with MediaWiki and make it more user friendly, or to create an entirely new application that allows adding and updating data in a triple store as well as displaying it. I'm not sure how to generate a graphical family tree view of the data, like on sites such as ancestry.com, where one can click on a box to see and update details about a person, or click a left or right arrow next to a box to navigate the tree. The data would come from SPARQL queries sent to the dataset/triple store, both when displaying the initial view and when navigating the tree, where an Ajax call is needed to fetch more data.
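A minimal sketch of the kind of query such a tree view could issue when a box is clicked, assuming the data sits behind a SPARQL endpoint and people are described with FOAF names and the RELATIONSHIP vocabulary; the endpoint URL and person URI are placeholders:

    from SPARQLWrapper import SPARQLWrapper, JSON

    # Placeholder endpoint, e.g. a local Fuseki store holding the converted GEDCOM.
    sparql = SPARQLWrapper("http://localhost:3030/genealogy/sparql")
    sparql.setReturnFormat(JSON)

    # Fetch the clicked person's name plus parents and children; the person URI
    # would come from the box the user clicked in the tree view.
    sparql.setQuery("""
        PREFIX foaf: <http://xmlns.com/foaf/0.1/>
        PREFIX rel:  <http://purl.org/vocab/relationship/>

        SELECT ?name ?parent ?parentName ?child ?childName WHERE {
            <http://example.org/person/I1> foaf:name ?name .
            OPTIONAL { ?parent rel:parentOf <http://example.org/person/I1> ;
                               foaf:name ?parentName . }
            OPTIONAL { <http://example.org/person/I1> rel:parentOf ?child .
                       ?child foaf:name ?childName . }
        }
    """)

    for row in sparql.query().convert()["results"]["bindings"]:
        print({k: v["value"] for k, v in row.items()})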
Bruce