Object.observe removed in Chrome 50 - google-chrome

I got a warning message in Chrome saying that the Object.observe method is deprecated and will be removed in Chrome 50, around April 2016.
Do you have an alternative solution to replace Object.observe?
Thanks

So it was deprecated and will be removed because of performance problems. Please see this link: http://www.infoq.com/news/2015/11/object-observe-withdrawn
I think you should look into the RxJS library and its Observable:
Using RxJS, you can represent multiple asynchronous data streams (that
come from diverse sources, e.g., stock quote, tweets, computer events,
web service requests, etc.), and subscribe to the event stream using
the Observer object. The Observable notifies the subscribed Observer
instance whenever an event occurs.
https://github.com/Reactive-Extensions/RxJS/
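As a rough illustration of the idea (not a drop-in replacement for Object.observe; the setProperty helper and the property names are made up for this sketch, RxJS 4 style), you can push changes through a Subject and subscribe to the resulting stream:
// Instead of observing mutations on a plain object, route every change
// through a Subject and let interested code subscribe to the stream.
var changes = new Rx.Subject();

function setProperty(obj, name, value) {
  obj[name] = value;
  changes.onNext({ object: obj, name: name, value: value });
}

var model = {};

changes.subscribe(function (change) {
  console.log('Property ' + change.name + ' changed to ' + change.value);
});

setProperty(model, 'title', 'Hello'); // logs: Property title changed to Hello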

Related

Google Data Studio - When is resetAuth() being called?

I am having trouble with the resetAuth() function. I implemented it roughly like this example, but I have no idea when it is being called. Adding console output and observing the Stackdriver log tells me that this function is never called during what I would consider a normal workflow.
The documentation is oddly brief and does not explain why I need to implement it or when it is called. Do I need to call resetAuth() manually at some point? Is there a button somewhere that calls this function?
I'm using the AuthType USER_PASS, by the way, and everything else seems to work just fine after some investigation and debugging.
I found this document called Community Connectors Developer Launch where, among other things, the following is listed (as of 2018-07-30):
What's next: Upcoming changes and improvements
Some of the features and improvements we'll be working on in the
coming months include:
Configuration and Authentication
Capability to execute the resetAuth function of community connectors from within Data Studio.
Does this mean that calling resetAuth() is currently not yet implemented?
resetAuth is called when the user revokes access to the connector via the https://datastudio.google.com/datasources/create endpoint.
There was a bug that caused this function to not be called for certain auth types, but it has been resolved.
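For reference, a minimal sketch of what a resetAuth() implementation for USER_PASS typically looks like (the property keys are illustrative, not mandated by the API): it simply clears whatever credentials the connector stored, so it has no visible effect until Data Studio actually invokes it on revocation.
function resetAuth() {
  // Remove the credentials this connector stored for the current user.
  var userProperties = PropertiesService.getUserProperties();
  userProperties.deleteProperty('dscc.username');
  userProperties.deleteProperty('dscc.password');
}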

Global Feathers hooks and event filters

In Express, it's very easy to block access to all routes starting with, say, the /admin prefix, simply by adding a middleware on that path before adding handlers for any specific endpoints under it.
In Feathers, it looks like we have to create some common hook modules and add them to every service we create, individually. The same goes for event filters.
I find the thought of forgetting to add an authentication hook or event filter scary, because I wouldn't notice the mistake unless I reviewed all the service initialization code or got hacked. In that sense, an Express middleware with some sort of whitelisting that I can easily implement for exceptional endpoints gives me much more peace of mind.
Is it possible to do something like that in Feathers?
(P.S.: I just noticed that I had protected my app's REST API but had forgotten to protect all the real-time events.)
As of v1.6.0 feathers-hooks supports application hooks that run on all services via app.hooks:
app.hooks({
  before(hook) {
    console.log('Global before hook');
  },

  after(hook) {
    console.log('Global after hook');
  },

  error(hook) {
    console.error(`Error in ${hook.path} method ${hook.method}`, hook.error.stack);
  }
});
For more examples see this blog post about error and application hooks.
As for real-time events, Feathers uses channels, which provide a safe way to send events only to the clients that should see them.
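A minimal sketch of that channel setup (Feathers v3+ channels API; it assumes the authentication plugin is configured) which only publishes events to authenticated clients:
// New connections start out in the anonymous channel.
app.on('connection', connection => {
  app.channel('anonymous').join(connection);
});

// Once a client authenticates, move its connection to the authenticated channel.
app.on('login', (authResult, { connection }) => {
  if (connection) {
    app.channel('anonymous').leave(connection);
    app.channel('authenticated').join(connection);
  }
});

// Publish all service events only to authenticated clients.
app.publish(() => app.channel('authenticated'));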

Logging to Event Viewer on Windows RT 8.1

I am working on an LOB (side-loaded) app and I need to log events and crashes to ETW (Event Viewer logs). I see that most suggestions are to write your own file I/O wrapper.
With Windows 8.1, we have new logging capabilities in "Windows.Foundation.Diagnostics", which has classes for "LoggingChannel" and "LoggingSession". But the code sample for them still writes to isolated local storage as files:
http://code.msdn.microsoft.com/windowsapps/LoggingSession-Sample-ccd52336
Also, earlier than 8.1, we have "EventSource" and "EventListener", and as per a sample project (http://code.msdn.microsoft.com/windowsapps/Logging-Sample-for-Windows-0b9dffd7/sourcecode?fileId=67472&pathId=1214683397), it also writes to the same isolated storage as files.
So, my questions are:
Can we utilize the new "Windows.Foundation.Diagnostics" classes to write to ETW?
Are "LoggingChannel" and "LoggingSession" ultimately equivalent to "EventSource" and "EventListener"?
Will I still have to write a C++ component for writing to ETW?
The Microsoft forum just gave this answer:
It is not designed with such a thing in mind.
I also tried using P/Invoke to consume the EventRegister and EventWrite C++ functions. The code runs, but I have no idea where to find the log. EventRegister only takes a GUID as input and I don't know how it maps to the Event Viewer application.
Short answer to the questions:
Windows.Foundation.Diagnostics.LoggingChannel writes events to ETW. However, it does not give you complete control over the event in the way that EventRegister/EventWrite do.
LoggingChannel is somewhat equivalent to .NET's EventSource. However, LoggingChannel always writes events to ETW, while EventSource can write to ETW but also has capabilities to bypass ETW. LoggingSession is similar in concept to EventListener, except that LoggingSession always receives events from ETW, while EventListener only works with EventSource (bypassing ETW). Note that you can use both LoggingChannel and EventSource in Windows Store apps.
You will have to write C++ code to use ETW if you need more capabilities than LoggingChannel or EventSource provides.
A few other comments based on things you mentioned:
Event Viewer shows data from the Event Log. The Event Log is not the same as ETW. The Event Log records data from various sources, and ETW is one of the sources that the Event Log supports. However, the Event Log does not record all ETW events -- there are billions of ETW events every hour, and it would fill your hard disk if all of them were recorded. To send an ETW event to the Event Log, you first have to make your event follow certain rules, and then you have to update the Event Log settings to watch for your specific event.
Event Log is designed to record events that are of interest to system administrators and system analysis tools. Because of this design, Microsoft requires administrator privileges to change the Event Log configuration. In order to have your events show up in Event Log, you need to have administrator privileges to change Event Log settings to make Event Log listen to your app's ETW events.
LoggingChannel does not support the necessary settings to make your ETW event look the way Event Log expects, so LoggingChannel cannot be used to write to the Event Log.
If you use EventRegister and EventWrite, you can write events in the format that Event Log expects, but you would still need to have administrator privileges to change Event Log settings to accept your events.
Note that EventRegister and EventWrite (and LoggingChannel) are for sending data to ETW. You can send anything you want to ETW, but by default ETW will just ignore it and throw it all away. ETW is the system for routing events from the provider to anybody who is interested in the event. If nobody is interested in the event, it gets thrown away by default.
LoggingChannel writes events out to ETW, but ETW will just drop them unless there is a session to record them. From within your app, you can record the events using LoggingSession. From outside your app, you can record the events using a tool such as xperf or tracelog.
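As a rough sketch of that in-app recording path (WinJS projection of the Windows 8.1 API; the channel, session, and file names are illustrative), you write events through a LoggingChannel and save what the LoggingSession captured to an .etl file:
var diag = Windows.Foundation.Diagnostics;

// Create a channel for the app's events and a session that records them.
var channel = new diag.LoggingChannel("MyAppChannel");
var session = new diag.LoggingSession("MyAppSession");
session.addLoggingChannel(channel);

channel.logMessage("App started", diag.LoggingLevel.information);

// Later (for example on suspend), flush the captured events to an .etl file
// that tools such as xperf or Windows Performance Analyzer can open.
session.saveToFileAsync(Windows.Storage.ApplicationData.current.localFolder, "applog.etl")
    .done(function (file) {
        console.log("Log saved to " + file.path);
    });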
You can use Windows.Foundation.Diagnostics.LoggingChannel from Windows 8.1 to write ETW events with some limitations. In particular: all events from all apps will always use the same provider GUID (4bd2826e-54a1-4ba9-bf63-92b73ea1ac4a), there is no way to access the keyword, channel, task, or opcode features of ETW, and you can only write very simple events. The Windows 8.1 LoggingChannel API is mainly focused on providing a simple string-based logging facility.
Windows 10 adds a bunch of new features, removing many of the limitations. You can use a different provider GUID (so it is easier to record just the events from your app), you can set keywords, tasks, and opcodes, and you can write strongly-typed events (i.e. events with strongly-typed field values instead of just a flat string). The Windows 10 LoggingChannel API allows you to use LoggingChannel for fairly advanced ETW scenarios, though it still works for simple logging.

Chrome extension to listen and capture streaming audio

Is it possible for a Chrome extension to listen for streaming audio from any of the browser's tabs? I would like to capture the streaming audio data and then analyse it.
Thanks
You could try three approaches, though none of them is guaranteed to meet your needs.
Before going into more detailed descriptions, I must note that Chrome extensions do not provide convenient tools for working at the per-connection level, the sufficiently low level required for stream capturing. This is by design. That is why the first approach is:
To look at other browsers, for example Firefox, which provides low-level APIs for connections. These are already known to be used by similar extensions; you may have a look at MediaStealer. If you do not have a specific requirement to build your system on Chrome, you should possibly move to Firefox.
You can develop a Chrome extension which intercepts HTTP requests by means of the webRequest API, analyses their headers and extracts media URLs (those with the audio/mpeg MIME type in the HTTP headers, for example); a rough sketch follows this list of approaches. Just for a quick example of code you may look at the following SO question - How to change response header in Chrome. Having the URL, you may force the appropriate media to download as a file. It will land in the default downloads folder and may have an unfriendly name. (I made such an extension, but I did not have a requirement for further processing.) If you need to further process such files, it can be a challenge to monitor them in the folder and run additional analysis in a separate program.
You may have a look at NPAPI plugins in general, and their streaming APIs in particular. I can imagine that you create a plugin registered for, again, the audio/mpeg MIME type, which receives the data via the NPP_NewStream, NPP_WriteReady and NPP_Write methods. The plugin can be wrapped into a Chrome extension. Though I have made NPAPI plugins, I never used this API, and I'm not sure it will work as expected. Nevertheless, I'm mentioning this possibility here for completeness. This method requires some coding other than web coding, namely C/C++. NB: NPAPI plugins are deprecated and have not been supported in Chrome since September 2015.
Taking into account that you have some external (to the extension) "fingerprinting service" in mind, which sounds like intelligent data processing, you may be interested in building the whole system outside of a browser. For example, you could possibly involve an HTTP proxy that saves media from the passing traffic.
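Here is the rough sketch of the second approach mentioned above (an assumption-laden illustration rather than tested code: the manifest must request the webRequest permission and host permissions, and the header check is simplified):
// Watch response headers and remember URLs whose Content-Type is audio/mpeg.
chrome.webRequest.onHeadersReceived.addListener(function (details) {
  var isAudio = (details.responseHeaders || []).some(function (header) {
    return header.name.toLowerCase() === 'content-type' &&
           header.value.indexOf('audio/mpeg') === 0;
  });
  if (isAudio) {
    console.log('Audio stream detected:', details.url);
    // With the "downloads" permission you could then force a download, e.g.:
    // chrome.downloads.download({ url: details.url });
  }
}, { urls: ['<all_urls>'] }, ['responseHeaders']);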
If you're writing a Chrome extension, you can use the Chrome tabCapture API to record audio.
chrome.tabCapture.capture({ audio: true }, function (stream) {
  var recorder = new MediaRecorder(stream);
  // [...]
});
The rest is left as an exercise to the reader; MDN has more documentation on how to use MediaRecorder.
When this question was asked in 2013, neither chrome.tabCapture nor MediaRecorder existed.
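For completeness, here is a sketch of what that "exercise" might look like (the ten-second cutoff and the audio/webm type are arbitrary choices for this example): collect the recorded chunks and combine them into a Blob you can analyse.
chrome.tabCapture.capture({ audio: true }, function (stream) {
  var recorder = new MediaRecorder(stream);
  var chunks = [];

  recorder.ondataavailable = function (event) {
    chunks.push(event.data);
  };

  recorder.onstop = function () {
    var blob = new Blob(chunks, { type: 'audio/webm' });
    // Hand the blob off to your analysis code, e.g. via URL.createObjectURL(blob).
  };

  recorder.start();
  setTimeout(function () { recorder.stop(); }, 10000); // stop after 10 seconds
});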
A Mac OS X solution using Soundflower: http://rogueamoeba.com/freebies/soundflower/
After installing Soundflower, it should appear as a separate audio device in the sound preferences (Apple > System Preferences > Sound). Divert the computer's audio to the 2ch option (stereo; 16ch is surround), then inside a DAW, such as Audacity, set the audio input to Soundflower. Now the sound should be channelled to your DAW, ready for recording.
Note: having diverted the audio from the internal speakers to Soundflower, you will only be able to hear the audio if the Soundflowerbed app is actually open. You know it's open if there's an 8-legged blob in the top-right task bar. Clicking this icon gives you the Soundflower options.
My Privoxy has the following log:
2013-08-28 18:25:27.953 00002f44 Request: api.audioaddict.com/v1/di/listener_sessions.jsonp?_method=POST&callback=_AudioAddict_WP_ListenerSession_create&listener_session%5Bid%5D=null&listener_session%5Bis_premium%5D=false&listener_session%5Bmember_id%5D=null&listener_session%5Bdevice_id%5D=6&listener_session%5Bchannel_id%5D=178&listener_session%5Bstream_set_key%5D=webplayer&_=1377699927926
2013-08-28 18:25:27.969 0000268c Request: api.audioaddict.com/v1/ping.jsonp?callback=_AudioAddict_WP_Ping__ping&_=1377699927928
2013-08-28 18:25:27.985 00002d48 Request: api.audioaddict.com/v1/di/track_history/channel/178.jsonp?callback=_AudioAddict_TrackHistory_Channel&_=1377699927942
2013-08-28 18:25:54.080 00003360 Request: pub7.di.fm/di_progressivepsy_aac?type=.flv
So I got the stream URL and recorded it:
D:\Profiles\user\temp>wget pub7.di.fm/di_progressivepsy_aac?type=.flv
--18:26:32-- http://pub7.di.fm/di_progressivepsy_aac?type=.flv
=> `di_progressivepsy_aac#type=.flv'
Resolving pub7.di.fm... done.
Connecting to pub7.di.fm[67.221.255.50]:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [video/x-flv]
[ <=> ] 1,234,151 8.96K/s
I got a file that can be played in any multimedia player.

Stream Position Returned By Box API Cannot Be Used To Track Events

Thanks for your reply to my question: Is this a bug of Box API v2 when getting events
This is a new problem related to it: I cannot reliably use the next_stream_position I got from previous calls to track events.
Consider this scenario with the following two GET HTTP queries:
1. GET https://api.box.com/2.0/events?stream_position=1336039062458
This one returns a JSON response which contains one file entry for myfile.pdf and the next stream position = 1336039062934.
2. GET https://api.box.com/2.0/events?stream_position=1336039062934
This call uses the stream position I got from the first call. However, it returns JSON containing exactly the same file entry for myfile.pdf as the first call.
I think that if the first call gives a stream position, it should be used as a mark for that exact time (say, Time A). If I use that stream position in subsequent queries, no events before Time A should be returned.
Is this a bug? Or did I use the API in the wrong way?
Many thanks.
Box’s /events endpoint is focused on delivering to you a highly reliable list of all the events relevant to your Box account. Events are registered against a time-sequenced list we call the stream_position. When you hit the /events API and pass in a stream_position we respond to you with the events that happened slightly before that stream position, up to the current stream_position, or the chunk_size, whichever is lesser. Due to timing lag and our preference to make sure you don’t miss some event, you may receive duplicate events when you call the /events API. You may also receive events that look like they are ‘before’ events that you’ve already received. Our philosophy is that it is better for you to know what has happened, than to be in the dark and miss something important.
Box events currently give you a window roughly 5 seconds into the past, so that you don't miss some event.
We have considered just delaying the events we send you by about 5 seconds and de-duplicating the events on our side, but at this point we've turned the dial more towards real-time. Let us know if you'd prefer a fully de-duped stream that was slower.
For now (in beta), if you write your client to check for duplicate events and discard them, that will be best. We are about to add an event_id to the payload so you can de-duplicate on that. Until then, you'll have to look at a bunch of fields, depending on the event type... It's probably more challenging than it is worth.
In order to help you be able to figure out if an event is a duplicate, we have now added to each event an event_id that will be unique. It is our intention that the event_id will allow you to de-duplicate the responses you receive from subsequent GET /events calls.
You can see this reflected in the updated documentation here, including example payloads.
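As a rough sketch of the client-side de-duplication recommended above (the fetch call, token handling, and in-memory cache are assumptions for illustration, not Box-provided code): keep the returned next_stream_position for the next call and drop entries whose event_id has already been seen.
var seenEventIds = {};

function pollEvents(streamPosition, accessToken) {
  return fetch('https://api.box.com/2.0/events?stream_position=' + streamPosition, {
    headers: { Authorization: 'Bearer ' + accessToken }
  })
    .then(function (response) { return response.json(); })
    .then(function (body) {
      body.entries.forEach(function (event) {
        if (!seenEventIds[event.event_id]) {
          seenEventIds[event.event_id] = true;
          console.log('New event:', event.event_type);
        }
      });
      // Duplicates near the ~5 second window boundary are expected and are
      // filtered out above; use the returned position for the next call.
      return body.next_stream_position;
    });
}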