How to send data from one digital twin to another? - bosch-iot-suite

Let's assume I have created two digital twins in Bosch IoT Things.
ENGINE TWIN:
Digital twin for the engine (engine temperature, oil level, etc.).
VEHICLE TWIN:
Digital twin for the entire vehicle (contains location, speed, engine temperature, etc.).
I have created a connection between Bosch IoT Things and the sensors using Bosch IoT Hub.
Now the engine temperature sensor sends data to the ENGINE TWIN, and I want this data to be automatically updated in the VEHICLE TWIN.
Question #1:
Is it possible for one digital twin (ENGINE TWIN) to send data to another digital twin (VEHICLE TWIN)?
Question #2:
If yes, how do I configure it to forward the data automatically once it receives data from the sensor?

No, it's not possible out of the box.
In the "digital twin" pattern, one digital twin mirrors one physical device/asset.
I would suggest modeling your twins close to the real-world devices: the most powerful device, aggregating the different capabilities, would be the vehicle.
The "vehicle" twin then contains (among others) an "engine" feature.
If you want to stick to a "several twins per physical asset" pattern, you are on your own in syncing/copying the data from one twin to another.
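To illustrate the suggested modeling, here is a small sketch of what an update to an "engine" feature on a single vehicle twin could look like. The thing ID and property names are hypothetical; the path/payload shape follows the Thing/feature model of Bosch IoT Things (based on Eclipse Ditto), but verify the exact API version against the service documentation.

```python
# Sketch: the vehicle is one twin; the engine is a feature of it.
# The sensor then updates the vehicle twin's "engine" feature directly,
# so no twin-to-twin copying is needed. IDs below are hypothetical.
import json

THING_ID = "my.namespace:vehicle-1"  # hypothetical thing ID

def engine_update_request(temperature_c, oil_level_pct):
    """Build the HTTP path and JSON payload that would update the
    vehicle twin's 'engine' feature properties in one PUT request."""
    path = f"/api/2/things/{THING_ID}/features/engine/properties"
    payload = {
        "temperature": temperature_c,
        "oilLevel": oil_level_pct,
    }
    return path, json.dumps(payload)

path, body = engine_update_request(92.5, 71)
print(path)  # /api/2/things/my.namespace:vehicle-1/features/engine/properties
print(body)
```

With this layout, the Hub connection routes the sensor telemetry straight to the vehicle twin's feature, and consumers read location, speed and engine data from one place.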

Related

Can I apply different styles to columns in column-count?

Is there any way to add different styles to columns made with column-count? I have a div that is divided into multiple columns using column-count. Only two columns are visible on the page at a time. I need to add margin-left to the first column, margin-right to the second column, and so on.
What I need is the same spacing on both the outer and inner sides of the pages, just like in a book.
.main {
  overflow: scroll;
  width: 100%;
  height: 438px;
  column-gap: 160px;
  columns: 2 auto;
  column-fill: auto;
  margin-top: 5px;
}
<div class="main">
Wikidata is a free, collaborative, multilingual, secondary database, collecting structured data to provide support for Wikipedia, Wikimedia Commons, the other wikis of the Wikimedia movement, and to anyone in the world. (… long filler text for the column demo …)
</div>
Here is the JSFiddle link for testing.
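CSS multi-column layout has no per-column selector, so the columns themselves cannot be styled individually. A common workaround (a sketch, not a tested drop-in for the fiddle above) is to put the outer margin on the container as side padding and make column-gap twice the desired inner margin, which produces the symmetric book-like spacing described:

```css
/* Sketch: emulate book-style margins without styling individual columns.
   The spacing lives on the container, not on the columns.
   Values are illustrative. */
.main {
  overflow: scroll;
  width: 100%;
  height: 438px;
  columns: 2 auto;
  column-fill: auto;
  margin-top: 5px;
  padding: 0 40px;        /* outer margin at both page edges */
  column-gap: 80px;       /* 2 × inner margin between the columns */
  box-sizing: border-box; /* keep width: 100% despite the padding */
}
```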

Is there an API for crime in the US?

As the question title states, I need crime data for all of the United States. I can't find a dataset for that, only numerous small ones for different cities and rural areas.
Is there a unified API, or should I maintain these small ones as well?
There is currently no single open dataset or API (Socrata-maintained or otherwise) that covers the US completely. Many cities publish crime reports to their open data portals, but the coverage is still pretty sparse.
There are also the FBI Uniform Crime Reporting datasets, but those are aggregated at the city level (which again is somewhat sparse), and the most recent data is a partial update from the first half of 2015.
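Given the per-city coverage, one practical approach is to query each city's Socrata portal through the SODA API and merge the results yourself. A minimal sketch of building such query URLs (the host and dataset ID below are examples; check each portal for its real identifiers):

```python
# Sketch: composing SODA API query URLs for per-city crime datasets.
# Hosts and dataset IDs are illustrative; every Socrata portal documents
# its own dataset identifiers.
from urllib.parse import urlencode

CITY_DATASETS = {
    "chicago": ("data.cityofchicago.org", "ijzp-q8t2"),  # example dataset ID
}

def soda_query_url(city, where, limit=1000):
    """Return the SODA JSON endpoint URL with a $where filter applied."""
    host, dataset = CITY_DATASETS[city]
    params = urlencode({"$where": where, "$limit": limit})
    return f"https://{host}/resource/{dataset}.json?{params}"

url = soda_query_url("chicago", "date > '2017-01-01'")
print(url)
```

Maintaining a small registry like `CITY_DATASETS` and normalizing each city's schema into a common one is, unfortunately, still the state of the art for nationwide coverage.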

Is it possible to obtain the coordinates of water bodies in Google Maps?

I need to acquire the coordinates of the outlines of all the water bodies inside a country, with the exception of "sea" or "ocean" water. Right now I'm manually outlining the lakes and rivers, but that is not a sustainable solution for the magnitude of the application I'm developing.
Even if I can only obtain the data for lakes or rivers, that would be a great start. I'm specifically interested in Malaysia, Brazil and the Dominican Republic.
This brings me to the question: where does Google obtain its data? Are these datasets available?
Google usually gets this data from TomTom (formerly TeleAtlas).
The coordinate polygons of that data are not available, at least not without paying a lot of money.
This data is usually extracted from aerial photos.
For research projects it might be possible to ask TomTom via your university.
An alternative professional-quality source is the NavStreets product from Here (formerly Nokia).
For free, you could try OpenStreetMap; it would give you coordinates.
Unfortunately, the OpenStreetMap data does not always consist of clean or closed polygons.
The quality depends a lot on the country. You can check a country first in the web browser: https://www.openstreetmap.org/relation/57963
Geofabrik.de provides OpenStreetMap data conversions and extracts for specific countries, e.g. in pbf and shp file formats; you might check this too.
Read further here:
http://wiki.openstreetmap.org/wiki/Waterways
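OpenStreetMap water polygons can also be pulled programmatically via the Overpass API. A sketch of building an Overpass QL query for inland water in one country, selected by ISO 3166-1 code (the tag choices follow common OSM conventions, but verify them against the wiki page above for your countries):

```python
# Sketch: compose an Overpass QL query for inland water features
# (natural=water, waterway=riverbank) inside one country. The returned
# query text would be POSTed to an Overpass endpoint such as
# https://overpass-api.de/api/interpreter.

def water_query(iso_code):
    """Build an Overpass QL query for water polygons in one country."""
    return (
        f'[out:json][timeout:180];'
        f'area["ISO3166-1"="{iso_code}"]->.country;'
        f'(way["natural"="water"](area.country);'
        f'relation["natural"="water"](area.country);'
        f'way["waterway"="riverbank"](area.country););'
        f'out geom;'
    )

print(water_query("MY"))  # query text for Malaysia
```

`out geom;` asks Overpass to include the node coordinates of each way, which is exactly the polygon outline data being asked for here.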

How to create indeed.com like search?

If you have used indeed.com before, you may know that for the keywords you search for, it returns traditional search results along with multiple search-refinement options on the left side of the screen.
For example, when searching for the keyword "designer", the refinement options are:
Salary Estimate
$40,000+ (45982)
$60,000+ (29795)
$80,000+ (15966)
$100,000+ (6896)
$120,000+ (2828)
Title
Floral Design Specialist (945)
Hair Stylist (817)
GRAPHIC DESIGNER (630)
Hourly Associates/Co-managers (589)
Web designer (584)
more »
Company
Kelly Services (1862)
Unlisted Company (1133)
CyberCoders Engineering (1058)
Michaels Arts & Crafts (947)
ULTA (818)
Elance (767)
Location
New York, NY (2960)
San Francisco, CA (1633)
Chicago, IL (1184)
Houston, TX (1057)
Seattle, WA (1025)
more »
Job Type
Full-time (45687)
Part-time (2196)
Contract (8204)
Internship (720)
Temporary (1093)
How does it gather statistics so quickly (e.g. the number of job offers in each salary range)? It looks like the refinement options are created in real time, since even uncommon keywords load fast.
Is there a specific SQL technique for creating such a feature? Or is there a manual on the web explaining the technology behind it?
The technology used in Indeed.com and other search engines is known as an inverted index, which is at the core of how search engines work (e.g. Google). The filtering you refer to ("refinement options") is known as faceting.
You can use Apache Solr, a full-fledged search server built on Lucene and easily integrated into your application via its RESTful API. It comes out of the box with features such as faceting, caching, scaling, spell-checking, etc. It is also used by sites such as Netflix, C-Net and AOL, so it is stable, scalable and battle-tested.
If you want to dig into how facet-based filtering works, look up bitsets/bitarrays; the technique is described in this article.
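The core idea behind faceting can be sketched in a few lines: an inverted index maps each term to the set of document IDs containing it, and facet counts are just tallies over the query's result set. A toy illustration (not how Solr is implemented internally; real engines use compressed bitsets and skip lists):

```python
# Toy sketch of inverted-index search with facet counting.
from collections import defaultdict

docs = {
    1: {"text": "web designer", "job_type": "Full-time", "city": "New York, NY"},
    2: {"text": "graphic designer", "job_type": "Part-time", "city": "Chicago, IL"},
    3: {"text": "hair stylist", "job_type": "Full-time", "city": "New York, NY"},
}

# Build the inverted index: term -> set of doc IDs containing it.
index = defaultdict(set)
for doc_id, doc in docs.items():
    for term in doc["text"].split():
        index[term].add(doc_id)

def search_with_facets(term, facet_field):
    """Return matching doc IDs plus counts per facet value."""
    hits = index.get(term, set())
    counts = defaultdict(int)
    for doc_id in hits:
        counts[docs[doc_id][facet_field]] += 1
    return hits, dict(counts)

hits, facets = search_with_facets("designer", "job_type")
print(sorted(hits))   # [1, 2]
print(facets)         # {'Full-time': 1, 'Part-2': ...} -> one count per job type
```

Because the index is built once at indexing time, answering "designer AND Full-time (45687)" at query time is set intersection and counting, not a table scan, which is why the numbers appear instantly.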
Why do you think they load "too fast"? They certainly have a nicely scaled architecture, they use caching for sure, and they might be using some denormalized datastore to accelerate certain computations and queries.
Take a look at Google and the number of web pages worldwide: do you also think Google works too fast?
In addition to what Mios said, and as Daimon mentioned, Indeed does use a denormalized doc store. Here is a link to Indeed's tech talk about its docstore:
http://engineering.indeed.com/blog/2013/03/indeedeng-from-1-to-1-billion-video/
Also another related article on their Engineering blog:
http://engineering.indeed.com/blog/2013/10/serving-over-1-billion-documents-per-day-with-docstore-v2/

What "other features" could be incorporated into a train database?

This is a mini project for a DBMS course. My task is to develop a database for the management of passenger trains.
I'm designing tables for customers, trains, ticket booking (via telephone and internet), origins and destinations.
The instructor said we are free to incorporate other features into our database model. Some of the features we can include are listed below:
Ad-hoc Querying
Data Mining
Demographic Passenger Mapping
Origin and Destination Mapping
I have no clue what these features mean. I know about data mining but am unable to apply it in this context. Can anyone expand on these features or suggest new ideas?
EDIT: What is Ad-hoc Querying? Give an example in this context.
Data mining would incorporate extracting useful facts and figures out of the data collected by your system and stored in the database. For example, data mining might discover that trains between city X and city Y are always 5 minutes late, or are never at more than 50% capacity, etc. So you may wish to develop some tools or scripts that run automatically and generate statistics (graphs are best) which display this information and highlight unusual trends. In the given example, the schedulers could then analyse why the trains are always late (e.g. maybe the train speedometers are wrong?).
Both points 3 and 4 are a subset of data mining, in my opinion. There is a huge number of metrics you could try to measure; it is really whatever you can think of. If you specify what type of data you are going to collect, that will make suggestions easier.
Basically, data mining just means "sort through the data to find interesting facts".
Based on comment below you could look for,
% of internet vs. phone sales
popular destinations & origins
customers age/sex/location
usage vs. time of day
...
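The suggestions above (internet vs. phone sales, popular destinations) are exactly the kind of ad-hoc queries the assignment mentions: one-off SQL questions asked against the live schema rather than pre-built reports. A small sketch using SQLite, with a hypothetical bookings table (table and column names are illustrative, not part of the assignment):

```python
# Sketch: "ad-hoc querying" in this context means running arbitrary
# one-off SQL against the schema. Schema and data are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE bookings (
        id INTEGER PRIMARY KEY,
        channel TEXT,        -- 'internet' or 'telephone'
        destination TEXT
    )
""")
conn.executemany(
    "INSERT INTO bookings (channel, destination) VALUES (?, ?)",
    [("internet", "Delhi"), ("internet", "Mumbai"),
     ("telephone", "Delhi"), ("internet", "Delhi")],
)

# Ad-hoc query 1: share of bookings per sales channel.
for channel, pct in conn.execute("""
    SELECT channel, 100.0 * COUNT(*) / (SELECT COUNT(*) FROM bookings)
    FROM bookings GROUP BY channel
"""):
    print(channel, pct)  # internet 75.0, telephone 25.0

# Ad-hoc query 2: most popular destinations.
top = conn.execute("""
    SELECT destination, COUNT(*) AS n FROM bookings
    GROUP BY destination ORDER BY n DESC
""").fetchall()
print(top)  # [('Delhi', 3), ('Mumbai', 1)]
```

For the project report, showing two or three such queries with their results is usually enough to demonstrate the "ad-hoc querying" feature.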