StorageProvider: What is populating and what does PopulationPolicy do?

I'm reading up on cloud file storage and ran across the PopulationPolicy property under Storage.Provider.StorageProviderSyncRootInfo, but I'm not sure what it does. The definition that MSDN provides just cuts off. Under the Fields section, AlwaysFull sounds similar to how the first part of HydrationPolicyModifier's ValidationRequired field works ("it guarantees that the data returned by the sync provider is always persisted to the disk prior to it being returned to the user application"). I believe that hydration fills the placeholder object with the correct data from the cloud (correct me if I'm wrong), but I'm confused about what populating does.
What is populating?
What does changing the PopulationPolicy to Full and AlwaysFull do?

Population is about files and folders (placeholders), not their content (Hydration).
If you don't use AlwaysFull (the only other valid value being Full), the platform will call your sync engine back with CF_CALLBACK_TYPE_FETCH_PLACEHOLDERS whenever it needs a directory's placeholders; with AlwaysFull, that type of callback is never used.
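To make the distinction concrete, here is a minimal C# sketch (assuming the Windows.Storage.Provider cloud files API; the ID, path, and icon values are illustrative placeholders) of where PopulationPolicy is chosen when registering a sync root:

using System;
using System.Threading.Tasks;
using Windows.Storage;
using Windows.Storage.Provider;

static async Task RegisterSyncRootAsync()
{
    var info = new StorageProviderSyncRootInfo
    {
        // Illustrative placeholder values, not a real provider or account.
        Id = "MyProvider!WindowsSID!MyAccount",
        Path = await StorageFolder.GetFolderFromPathAsync(@"C:\Users\me\MyCloudDrive"),
        DisplayNameResource = "My Cloud Drive",
        IconResource = @"C:\Program Files\MyProvider\app.ico,0",
        Version = "1.0",
        // Full: the platform raises CF_CALLBACK_TYPE_FETCH_PLACEHOLDERS so the
        // sync engine can create placeholders on demand as folders are opened.
        // AlwaysFull: the engine must keep the whole namespace populated itself,
        // and that callback is never used.
        PopulationPolicy = StorageProviderPopulationPolicy.Full,
        HydrationPolicy = StorageProviderHydrationPolicy.Full,
    };
    StorageProviderSyncRootManager.Register(info);
}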

Disable Search Parameters

I have an instance of Microsoft FHIR Server, and I would like to disable some of the search parameters. Can I do this by updating the SearchParameter resource and setting its "status" to "retired", or do I need to add the parameter's URL to the unsupported-search-parameters list? The goal is to reduce the number of search values indexed when our application does not use those search parameters.
P.S. It would be nice if the solution allowed re-activating the search parameter if needed (and performing $reindex).
Thanks!
There is currently no API level support to do this with the built-in FHIR parameters.
The values in unsupported-search-parameters are loaded into the database, and after that they are tracked there. This is because, over time, the server may support new parameters that can't be turned on immediately, as doing so would leave the indexes inconsistent.
In the Cosmos collection the status can be "Enabled", "Supported", "Disabled" or "Deleted". If a parameter is Supported, it will not be available for search but will continue to be indexed.
When Disabled, the server will recheck it for support; I believe when set to Deleted it will no longer index the data.
To re-enable it, the status could be set back to Supported.

Getting specific data from video surveillance web-interface in Zabbix

Hi guys! I'm looking for a solution or some ideas on how to solve my task.
There is a video surveillance camera (vendor: Hikvision) with an accessible web interface.
In the web interface there is a field, Device Name, containing data I need to retrieve via the Zabbix server and then use for renaming discovered hosts.
Since Hikvision cameras support SNMP, I've tried the SNMP agent in Zabbix. It turned out that the Hikvision MIB doesn't contain the data from that field.
Also, while exploring the web interface through Developer Tools in Google Chrome, I stumbled upon the request URL http://10.90.187.16/ISAPI/System/deviceInfo, which gives this response in XML format:
<DeviceInfo xmlns="http://www.hikvision.com/ver20/XMLSchema" version="2.0">
<deviceName>1.5.1.1</deviceName>
<deviceID>566eec0b-6580-11b3-81a1-1868cb48861f</deviceID>
<deviceDescription>IPCamera</deviceDescription>
<deviceLocation>hangzhou</deviceLocation>
<systemContact>Hikvision.China</systemContact>
<model>DS-2CD2155FWD-IS</model>
<serialNumber>DS-2CD2155FWD-IS20170417AAWR749464587</serialNumber>
<macAddress>18:68:cb:48:86:1f</macAddress>
<firmwareVersion>V5.4.5</firmwareVersion>
<firmwareReleasedDate>build 170124</firmwareReleasedDate>
<encoderVersion>V7.3</encoderVersion>
<encoderReleasedDate>build 170123</encoderReleasedDate>
<bootVersion>V1.3.4</bootVersion>
<bootReleasedDate>100316</bootReleasedDate>
<hardwareVersion>0x0</hardwareVersion>
<deviceType>IPCamera</deviceType>
<telecontrolID>88</telecontrolID>
<supportBeep>false</supportBeep>
<supportVideoLoss>false</supportVideoLoss>
</DeviceInfo>
The tag <deviceName>1.5.1.1</deviceName> contains the required data, and now the question is how to put two and two together by means of Zabbix.
Digging into the Zabbix documentation, I've found an article about creating an item based on the HTTP agent with an XML request. Unfortunately, there aren't any examples of how to do it exactly.
Has somebody had such experience? Any clues will be helpful.
You can create an HTTP agent item, set it to the TEXT type, and point it to http://10.90.187.16/ISAPI/System/deviceInfo (don't forget the authentication, if required!); Zabbix will retrieve the full XML.
To get the desired value you have to create a dependent item, point it to the previous item and set up a preprocessing step.
Create a single XML XPath preprocessing rule with the parameter string(/DeviceInfo/deviceName) to get the 1.5.1.1 value (note that XPath is case sensitive, and the sample XML uses deviceName, not DeviceName).
If you want the firmware version, create another dependent item and set its XPath to string(/DeviceInfo/firmwareVersion), and so on for every element you need.
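One caveat, flagged as an assumption rather than a confirmed requirement: the sample XML above declares a default namespace (xmlns="http://www.hikvision.com/ver20/XMLSchema"), and depending on how the XPath engine resolves it, the plain paths may return nothing. In that case a namespace-agnostic expression usually works:

string(/*[local-name()='DeviceInfo']/*[local-name()='deviceName'])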
If you want a single value, you can use a single item, adding the preprocessing rule to the HTTP agent item itself. I use my solution for flexibility: maybe one day I'll need another XML element, or maybe a firmware update will add some element to the page.
Dependent items are more flexible, but of course the full XML uses more storage in the database for stuff you don't need right now: it's a tradeoff, either way works!

Data Studio connector making multiple calls to API when it should only be making 1

I'm finalizing a Data Studio connector and noticing some odd behavior with the number of API calls.
Where I'm expecting to see a single API call, I'm seeing multiple calls.
In my Apps Script I'm keeping a simple tally which increments by 1 on every URL fetch, and that gives me the correct number I expect to see for getData().
However, in my API monitoring logs (using Runscope) I'm seeing multiple API requests for the same endpoint, and varying numbers for different endpoints in a single getData() call (they should all be the same).
I can't post the code here (client project) but it's substantially the same framework as the Data Connector code on Google's docs. I have caching and backoff implemented.
Looking for any ideas or if anyone has experienced something similar?
Thanks
Per this reference, GDS will also perform semantic type detection if you aren't explicitly defining the semantic type for your fields. If the query is for semantic type detection, the request will feature sampleExtraction: true:
When Data Studio executes the getData function of a community connector for the purpose of semantic detection, the incoming request will contain a sampleExtraction property which will be set to true.
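A minimal Apps Script sketch of guarding against those extra calls (buildSampleResponse and buildFullResponse are hypothetical helpers standing in for your own code):

function getData(request) {
  // Per the docs quoted above, semantic type detection runs arrive with
  // scriptParams.sampleExtraction set to true.
  var isSampleRun = request.scriptParams && request.scriptParams.sampleExtraction;
  if (isSampleRun) {
    // Serve a tiny canned row set instead of hitting the remote API.
    return buildSampleResponse(request);
  }
  return buildFullResponse(request); // the normal fetch path
}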
If the GDS report includes multiple widgets with different dimensions/metrics configuration then GDS might fire multiple getData calls for each of them.
Kind of a late answer, but this might help others who are facing the same problem.
The widgets / search filters attached to a graph issue getData calls of their own. If your custom connector retrieves data from third-party services via API calls that are agnostic to the request.fields property sent by GDS, then these API calls are multiplied by N+1 (where N is the number of widgets / search filters your report implements).
I could not find an official solution for this either, so I invented a workaround using cache.
The graph's getData request (typically requesting more fields than the search filters) will be the only one allowed to query the API endpoint. Before starting to do so, it stores a key in the cache, "cache_{hashOfReportParameters}_building" => true:
if (enableCache) {
  cache.putString("cache_{hashOfReportParameters}_building", 'true');
  Logger.log("Cache is being built...");
}
It will retrieve API responses, paginating in a loop, and buffer the results.
Once finished, it deletes the cache key "cache_{hashOfReportParameters}_building" and caches the final merged results it buffered so far under "cache_{hashOfReportParameters}_final".
When it comes to filters, they also invoke getData, but typically with only up to 3 requested fields. The first thing we want to do is make sure they cannot start executing before the primary getData call, so we add a little delay for anything that looks like a search filter / widget after the same data set:
if (enableCache) {
  var countRequestedFields = requestedFields.asArray().length;
  Logger.log("Total requested fields: " + countRequestedFields);
  if (countRequestedFields <= 3) {
    Logger.log('This seems to be a search filter.');
    Utilities.sleep(1000);
  }
}
After that, we compute a hash over all of the moving parts of the report (the date range, plus all of the other parameters you have set up that could influence the data retrieved from your API endpoints).
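A sketch of that hashing step using the standard Apps Script Utilities service (which request properties you fold in depends on your connector; dateRange and configParams are the obvious candidates):

function computeReportHash(request) {
  // Serialize everything that affects the API response.
  var parts = JSON.stringify({
    dateRange: request.dateRange,
    configParams: request.configParams
  });
  var digest = Utilities.computeDigest(Utilities.DigestAlgorithm.MD5, parts);
  // computeDigest returns signed bytes; normalize to a hex string.
  return digest.map(function (b) {
    var v = (b < 0 ? b + 256 : b).toString(16);
    return v.length === 1 ? '0' + v : v;
  }).join('');
}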
Now the best part, as long as the main graph is still building the cache, we make these getData calls wait:
while (cache.getString('cache_{hashOfReportParameters}_building') === 'true') {
  Logger.log('A similar request is already executing, please wait...');
  Utilities.sleep(2000);
}
After this loop we attempt to retrieve the contents of "cache_{hashOfReportParameters}_final" -- and in case we fail, it's always a good idea to have a backup plan, which is to allow it to traverse the API again. We have encountered roughly a 2% error rate retrieving data we cached...
With the cached result (or buffered API responses), you just transform your response as per the schema GDS needs (which differs between graphs and filters).
As you start implementing this, you'll notice yet another problem: the Apps Script cache is limited to a maximum of 100 KB per key. There is, however, no limit on the number of keys you can cache, and fortunately others have encountered similar needs in the past and have come up with a smart solution: splitting the one big chunk you need cached across multiple cache keys, and gluing them back together into one object on retrieval.
See: https://github.com/lwbuck01/GASs/blob/b5885e34335d531e00f8d45be4205980d91d976a/EnhancedCacheService/EnhancedCache.gs
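For reference, the idea in that linked EnhancedCache boils down to something like this sketch (using the plain CacheService API; the 90 KB chunk size is a conservative assumption to stay under the per-key limit):

var CHUNK_SIZE = 90 * 1024; // stay safely under the ~100 KB per-key limit

function putLargeString(cache, key, value, ttlSecs) {
  var count = Math.ceil(value.length / CHUNK_SIZE);
  for (var i = 0; i < count; i++) {
    cache.put(key + '_' + i, value.substr(i * CHUNK_SIZE, CHUNK_SIZE), ttlSecs);
  }
  cache.put(key + '_count', String(count), ttlSecs);
}

function getLargeString(cache, key) {
  var count = Number(cache.get(key + '_count'));
  if (!count) return null;
  var parts = [];
  for (var i = 0; i < count; i++) {
    var part = cache.get(key + '_' + i);
    if (part === null) return null; // a chunk expired: treat as a cache miss
    parts.push(part);
  }
  return parts.join('');
}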
I cannot share the final solution we have implemented with you as it is too specific to a client - but I hope that this will at least give you a good idea on how to approach the problem.
Caching the full API result is a good idea in general to avoid round trips and server load for no good reason if near-realtime is good enough for your needs.

Databinding issue updating data between a MySQL database and my DevExpress gauge value?

Background:
I'm developing a trading system which subscribes to many events sent by the Interactive Brokers API. One interesting event reports my trading account value, which fluctuates during trading hours, so I would prefer to see that information immediately via the accountvalueupdate event. I'm developing this based on the ActiveX API and C# in Visual Studio 2010.
The way I want to present this information is a gauge developed by DevExpress (http://www.devexpress.com/Products/NET/Controls/WinForms/Gauges/). This gauge looks fancy, but the principle should be similar to the normal gauges we use in Visual Studio. It seems I can only update the value of the gauge by data binding, since I tried to assign the updated account value to this.myGauge.value and failed.
I set up a MySQL connection between MySQL and VS2010. I created only one table in MySQL, called account. For the sake of simplicity, it has only two columns (accountID and accountValue) and one row (which means that when an event comes with a new accountValue, I just overwrite the previous value, so the number of rows is always one; really simple idea...). In the gauge's properties I found the data binding option, and I set it up by using the advanced option to navigate through the available tables and bind to the only useful column, accountValue.
Issue:
I set up the default value of accountValue as 500 for testing. I build my software. The gauge shows 500 correctly.
Of course, my real account value is not 500, so now I click a button to connect to the API and start listening for the event. After a few seconds the event arrives. Since I had a console open for managing the MySQL table, I used select * from account to continuously watch for updates, and I noticed the value of the accountValue column (the table works right, and we only have one row in overwriting mode) became the right one, for example 35000.
HOWEVER, THE GAUGE DOES NOT CHANGE AT ALL! Now, if I close my software and build again, the gauge shows the right value, 35000. Then I shut down the API, so no events are coming, and use the MySQL command line to change the value of accountValue back to 500. No updating in the gauge either.
It looks like the gauge only reads the value from the table during the build session, or when it starts, and never listens for updates to the bound database.
By the way, I tried setting the data binding's update mode to either "OnValidation" or "OnPropertyChanged", but neither solves it, although "OnPropertyChanged" looks like the right one...
I tried to assign updated account value to this.myGauge.value...
Unfortunately, the information you provided doesn't allow us to clearly diagnose this problem. There is no Value property in the WinForms GaugeControl or in CircularGauge (nor in the Linear/Digital/StateIndicator gauges); only ASPxGaugeControl has one (ASPxGaugeControl.Value). So, please provide full sample code that does not work on your side.
All these properties can be changed manually in code or data-bound to data sources using the standard .NET data-binding mechanism:
- The ArcScaleComponent.DataBindings property allows you to data-bind to the current value of a circular gauge's scale (ArcScale.Value).
- The LinearScaleComponent.DataBindings property allows you to data-bind to the current value of a linear gauge's scale (LinearScale.Value).
- The DigitalGauge.DataBindings property allows you to data-bind to the text displayed by a digital gauge.
- The StateIndicatorComponent.DataBindings property allows you to data-bind to a state of a state indicator gauge.
Please review the following article for more details: Data Binding.
The databinding feature is demonstrated in the Gauge's Main Demo project (the DataBinding module):
this.arcScaleComponent2.DataBindings.Add(
new System.Windows.Forms.Binding("Value", this.productsBindingSource,
"UnitsOnOrder", true, System.Windows.Forms.DataSourceUpdateMode.Never));
P.S. Please use the DevExpress Support Center to ask questions or report issues, because there is no guarantee of DX involvement when you use communities, newsgroups, or other communication channels.

WinInet: Why does first ever HttpSendRequest take longer?

I promise this isn't as simple as it sounds. I'm wondering why the first-ever call to HttpSendRequest takes much longer than subsequent calls, even when the later requests are for a different URL. For example:
InternetConnect(... "foo.com" ...) // returns immediately
HttpOpenRequest(...) // returns immediately
HttpSendRequest(...) // takes ~3 sec
HttpSendRequest(...) // takes ~200 ms
InternetConnect(... "bar.com" ...) // returns immediately
HttpOpenRequest(...) // returns immediately
HttpSendRequest(...) // takes ~200 ms
Why does the first HttpSendRequest(...) take so much longer? This is very consistent, regardless of the URLs.
Thanks,
Greg
There are several things that may need to happen on the first request that don't need to happen on the second. DNS lookup and proxy detection immediately come to mind.
It could also be config file loading. Some of the .NET Framework classes will attempt to read settings from the application config file, reverting to defaults if no file or setting is found. E.g. WebRequest/WebClient, which I think the HTTP classes use under the hood, will check for explicit web proxy settings; if those don't exist, the proxy settings from the OS (as set within IE) are picked up. All of this contributes to an initial startup lag, usually when the class is first used; that is, the work is often done within a static constructor.
The config settings are defined here:
Configuration File Schema for the .NET Framework
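For the managed-code case, a minimal app.config sketch that pins the proxy behavior down explicitly (disabling the default proxy here is just one illustrative choice; it avoids paying for automatic proxy detection on the first request):

<configuration>
  <system.net>
    <!-- Skip automatic proxy detection so the first request doesn't pay for it. -->
    <defaultProxy enabled="false" />
  </system.net>
</configuration>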