ActionScript RSS embedded in HTML

I'm new to ActionScript and have a question:
I've written an RSS reader using AS 3.0 in CS 5.5.
When I press Ctrl+Enter it reads my RSS feed,
but when I publish it as HTML it just gets stuck on the picture (shown on the stage) and does nothing, plus it shows a sandbox violation error.
I've spent all day reading the documentation and understood that it has something to do with domain restrictions, but I still can't figure out what to do exactly. Can you please help me?
This is the code of my SWF file, russian.swf:
var news_title:Array = new Array();
var news_descr:Array = new Array();
var news_pubdate:Array = new Array();

var rus = "http://news.yandex.ua/index.rss";
test(rus, txt_descr, txt_title);

function test(link, txt_descr, txt_title)
{
    var rssLoad:URLLoader = new URLLoader(new URLRequest(link));
    rssLoad.addEventListener(Event.COMPLETE, end_rssLoad);

    function end_rssLoad(rss_data:Event)
    {
        var rss_file:XML = new XML(rss_data.target.data);
        for each (var item:XML in rss_file.channel.item)
        {
            news_title.push(item.title);
            news_descr.push(item.description);
            news_pubdate.push(item.pubDate);
        }
        show_rss();
    }

    function show_rss()
    {
        // number of news in rss field
        var i:Number = 0;
        // number of loops before update the field
        var n:Number = 0;

        function assign_rss_textBox()
        {
            txt_title.htmlText = news_title[i];
            var blank_height = txt_descr.height;
            txt_descr.htmlText = news_descr[i];
            txt_descr.autoSize = "center";
            txt_descr.y = txt_descr.y + (blank_height - txt_descr.height) / 2;
            i += 1;
            if (i >= news_title.length)
            {
                i = 0;
                n += 1;
                if (n > 2)
                {
                    clearInterval(delay_assign_rss_textBox);
                }
            }
        }

        assign_rss_textBox();
        var delay_assign_rss_textBox = setInterval(assign_rss_textBox, 500);
    }
}
And this is the code of the HTML page:
<html>
<body>
    <object width="600" height="125">
        <param name="movie" value="russian.swf">
        <param name="quality" value="high">
        <embed src="russian.swf" quality="high" width="600" height="125">
        </embed>
    </object>
</body>
</html>

As you've correctly identified, this is a cross-domain security issue caused by the fact that you are trying to load data from another domain into your Flash file. If you look at the crossdomain.xml on the domain on which the feed is stored, you will see that it only allows requests from the domain itself.
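For comparison, a completely open policy - one that would let a SWF from any domain read the site's data - looks roughly like the following. This is a generic illustration of the crossdomain.xml format, not Yandex's actual file:
<?xml version="1.0"?>
<cross-domain-policy>
    <allow-access-from domain="*"/>
</cross-domain-policy>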
If you don't have any control over that cross domain policy, which I presume you don't, the usual solution would be to create a server-side proxy on your own domain to read the data and expose it to your SWF. This article explains the process quite nicely and includes an example script.
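I don't know which server-side language you have available, but as a very rough sketch, if your page happens to be served by PHP, a minimal proxy (say rss_proxy.php - the file name is just an example) could look like this; your SWF would then request rss_proxy.php from its own domain instead of the Yandex URL:
<?php
// Minimal proxy sketch: the server fetches the remote feed and passes it through,
// so the SWF only ever talks to its own domain. Assumes allow_url_fopen is enabled;
// otherwise use cURL instead of file_get_contents.
header('Content-Type: application/rss+xml; charset=utf-8');
echo file_get_contents('http://news.yandex.ua/index.rss');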
There's also a solution explained here which involves mirroring the feed in Feedburner and consuming it from there (presumably its cross domain policy is more lenient) rather than directly from the source feed.

Related

HTML5 audio seek is not working properly. Throws Response Content-Length mismatch Exception

I'm trying to stream an audio file to an Angular application, where an HTML5 audio element has its src set to my API endpoint (for example /audio/234). My backend is implemented with .NET Core 2.0. I have already implemented streaming along the lines of .NET Core | MVC pass audio file to html5 player. Enable seeking.
Seeking works if I don't seek to the end of the file immediately after the audio starts playing. I use the audio element's autoplay attribute to start playing as soon as the element has enough data. So in my situation the audio element does not yet have all the data when I seek, and it makes a new GET request to my API. In that situation this exception appears in my backend log:
fail: Microsoft.AspNetCore.Server.Kestrel[13]
[1] Connection id "0HL9V370HAF39", Request id "0HL9V370HAF39:00000001": An unhandled exception was thrown by the application.
[1] System.InvalidOperationException: Response Content-Length mismatch: too few bytes written (0 of 6126919).
Here is my audio controller GET method.
byte[] audioArray = new byte[0];
// Here I load audio file from cloud
long fSize = audioArray.Length;
long startbyte = 0;
long endbyte = fSize - 1;
int statusCode = 200;

var rangeRequest = Request.Headers["Range"].ToString();
_logger.LogWarning(rangeRequest);

if (rangeRequest != "")
{
    string[] range = Request.Headers["Range"].ToString().Split(new char[] { '=', '-' });
    startbyte = Convert.ToInt64(range[1]);
    if (range.Length > 2 && range[2] != "") endbyte = Convert.ToInt64(range[2]);
    if (startbyte != 0 || endbyte != fSize - 1 || range.Length > 2 && range[2] == "")
    {
        statusCode = 206;
    }
}

_logger.LogWarning(startbyte.ToString());
long desSize = endbyte - startbyte + 1;
_logger.LogWarning(desSize.ToString());
_logger.LogWarning(fSize.ToString());

Response.StatusCode = statusCode;
Response.ContentType = "audio/mp3";
Response.Headers.Add("Content-Accept", Response.ContentType);
Response.Headers.Add("Content-Length", desSize.ToString());
Response.Headers.Add("Content-Range", string.Format("bytes {0}-{1}/{2}", startbyte, endbyte, fSize));
Response.Headers.Add("Accept-Ranges", "bytes");
Response.Headers.Remove("Cache-Control");

var stream = new MemoryStream(audioArray, (int)startbyte, (int)desSize);
return new FileStreamResult(stream, Response.ContentType)
{
    FileDownloadName = track.Name
};
Am I missing some header, or what?
I didn't get this exception with .NET Core 1.1, but I'm not sure whether that is just a coincidence and/or bad testing. If anybody knows whether something changed in .NET Core related to streaming, I would appreciate that info.
After researching more I found this: https://learn.microsoft.com/en-us/aspnet/core/aspnetcore-2.0 (see the "Enhanced HTTP header support" heading). It says:
If an application visitor requests content with a Range Request header, ASP.NET will recognize that and handle that header. If the requested content can be partially delivered, ASP.NET will appropriately skip and return just the requested set of bytes. You do not need to write any special handlers into your methods to adapt or handle this feature; it is automatically handled for you.
So all I need is some cleanup when I move from .NET Core 1.1 to 2.0, because there is already a handler for those headers:
byte[] audioArray = new byte[0];
// Here I get my MP3 file from cloud
var stream = new MemoryStream(audioArray);
return new FileStreamResult(stream, "audio/mp3")
{
    FileDownloadName = track.Name
};
The problem was in the headers. I don't know exactly whether a header was incorrect or my stream initialization was incorrect, but now it's working. I used this: https://stackoverflow.com/a/35920244/8081009 . The only change I made was renaming it to AudioStreamResult. Then I used it like this:
Response.ContentType = "audio/mp3";
Response.Headers.Add("Content-Accept", Response.ContentType);
Response.Headers.Remove("Cache-Control");
var stream = new MemoryStream(audioArray);
return new AudioStreamResult(stream, Response.ContentType)
{
    FileDownloadName = track.Name
};
Notice that I pass the full stream to AudioStreamResult:
var stream = new MemoryStream(audioArray);
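As a side note of my own (not part of the original answer, so treat it as an assumption to verify): as far as I remember, ASP.NET Core 2.1 and later made range processing on file results opt-in, so on newer versions the equivalent would look roughly like this:
byte[] audioArray = new byte[0];
// Here I get my MP3 file from the cloud
var stream = new MemoryStream(audioArray);
return new FileStreamResult(stream, "audio/mp3")
{
    FileDownloadName = track.Name,
    EnableRangeProcessing = true // opt in to Range / Content-Range handling
};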

ActionScript, NativeProcess, resolvePath and SWF do not work

I will explain my problem, but first I have to show you my configuration to give you all the details.
I have two virtual machines, both Windows 7. The first one is where I develop all my ActionScript and where my development environment (IDE) is; on the second one nothing special is installed. Both have Adobe AIR and Adobe Flash Player.
OK, here is my problem. On the first machine I develop a script that uses NativeProcess to run CMD.exe, which loads a DLL via the command line.
When I Build & Run the project everything is OK; I check and the DLL is loaded. The problem is when the second Windows machine connects to my localhost website (the first Windows machine acting as a server) and runs the file "myProgram.swf" (the ActionScript program): it does not load my DLL.
Now I will show you all my code.
This is the script that loads the DLL, "myProgram.swf":
public class NativeProcessExample extends Sprite
{
    public var process:NativeProcess;

    public function NativeProcessExample()
    {
        if (NativeProcess.isSupported)
        {
            setupAndLaunch();
        }
        else
        {
            trace("NativeProcess not supported.");
        }
    }

    public function setupAndLaunch():void
    {
        var fmt:TextFormat = new TextFormat();
        var txt:TextField = new TextField();
        fmt.size = 32;
        txt.text = 'Hello, world!' + '\n' +
                   'Width = ' + stage.fullScreenWidth + '\n' +
                   'Height = ' + stage.fullScreenHeight;
        txt.setTextFormat(fmt);
        txt.autoSize = TextFieldAutoSize.LEFT;
        addChild(txt);

        var nativeProcessStartupInfo:NativeProcessStartupInfo = new NativeProcessStartupInfo();
        var file:File = File.applicationDirectory.resolvePath("C:\\Windows\\System32\\regsvr32.exe");
        nativeProcessStartupInfo.executable = file;

        var args:Vector.<String> = new Vector.<String>();
        args.push("C:\\Users\\myUser\\Downloads\\myDLL.dll");
        nativeProcessStartupInfo.arguments = args;

        var process:NativeProcess = new NativeProcess();
        process.start(nativeProcessStartupInfo);
        process.addEventListener(NativeProcessExitEvent.EXIT, exitHandler);
I cut the script (I deleted all the includes and the end part) because it's too long, but this is the most interesting part.
Now I will show you my "index.php", which the 2nd Windows machine connects to in order to retrieve and inject the DLL:
<!DOCTYPE html>
<html>
<head>
<title>Test</title>
<style type="text/css">
body, html
{
width:100%;
height:100%;
overflow:hidden;
}
#SWFSquare
{
height: 200px;
width: 200px;
background-color: blue;
}
</style>
<script type="text/javascript" src="swfobject.js"></script>
<script src="https://code.jquery.com/jquery-1.12.0.min.js"></script>
<script src="https://code.jquery.com/jquery-migrate-1.2.1.min.js"></script>
</head>
<body bgcolor="#ffdfaf">
<div id="SWFSquare">
</div>
<input type="button" value="Download" id="buttonDownload" style="margin-left: auto; margin-right: auto; display: block;">
<script type="text/javascript">
$(function() {
$("#buttonDownload").click(function() {
window.open("myDLL.dll");
myFunction();
});
function myFunction() {
setTimeout(function(){
var element = document.getElementById("SWFSquare");
swfobject.embedSWF("myProgram.swf", element, 300, 120, 10);
},10000);
}
});
</script>
</body>
</html>
So I hope you have all the needed information. Do not hesitate to ask me for more.
To recap: when I launch my script on the 1st Windows machine under my development environment (IDE), everything works and my DLL is loaded, but when I try to load it from the 2nd Windows machine by connecting to index.php (the 1st Windows machine acting as a server), the SWF works, since I get the "Hello, world!" message on the page, but the DLL is not loaded...
Can you help me? I have been working on this for 2 weeks :-(.
First of all, thank you guys for the quick response :-)
So, I will answer Akmozo's question:
As you can see from the description of my ActionScript, it uses NativeProcess to run the command that loads myDLL.dll.
So, I just have to execute the SWF to start all of this. That is the relation between the AIR app and the SWF. I work in the FlashDevelop environment, and every script "myProgram.as" that you Build & Run creates a "myProgram.swf" file. Once I get this file (automatically created), I just have to run it on the web via my "index.php", more precisely with this code:
var element = document.getElementById("SWFSquare");
swfobject.embedSWF("myProgram.swf", element, 300, 120, 10);
So, when the 2nd Windows machine connects to index.php, it runs myProgram.swf, and in the end the DLL is not loaded...
That's my problem. Did I answer you, Akmozo?
Now, for your answer, VC.one: I think it should be possible to do it in the environment I specially prepared.
That is to say:
1st Windows with the latest updates and patches
2nd Windows with no updates and not the latest Flash Player (currently 19.0.0.206)
I'm an IT security researcher (student), and that's why I'm now working on a vulnerability in Adobe Flash Player 19. Normally it should be possible, because there is already a CVE covering this, and I would like to (re)create this scenario. But I'm still stuck on this problem and I think I missed something, but I don't know what it is...
But I'm still stuck on this problem and I think I missed something, but I don't know what it is...
Akmozo is correct. Flash Player (in the browser) and AIR (an OS app) are two different ways to run AS3 code as an application. They don't always work the same: an AS3 app rendered by the browser's Flash Player plugin is much more limited for security reasons. It cannot run programs on a computer, otherwise hackers and virus creators would have found heaven with this power, spreading chaos via the internet.
Also, think about what happens if the SWF is run from a Mac or Linux browser. How would those operating systems load the DLL (since it's a Windows-only file)? That would break the rule that code in the browser works the same everywhere, regardless of platform.
Just to prove a point... update your text field code to look like the code below. In IDE testing it should say (NP) Support = true, but in the browser you will get false. Of course, when it's false you cannot load the DLL from a browser.
var fmt:TextFormat = new TextFormat();
var txt:TextField = new TextField();
fmt.size = 32;
txt.text = 'Hello, world!' + '\n' +
           'Width = ' + stage.fullScreenWidth + '\n' +
           'Height = ' + stage.fullScreenHeight + '\n' +
           '(NP) Support = ' + String(NativeProcess.isSupported); //# check if available
txt.setTextFormat(fmt);
txt.autoSize = TextFieldAutoSize.LEFT;
addChild(txt);
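For completeness, and as far as I know: NativeProcess is only available to content that is packaged and installed as an AIR application using the extendedDesktop profile, declared in the application descriptor roughly like this (a fragment for illustration, not a complete descriptor):
<application xmlns="http://ns.adobe.com/air/application/19.0">
    <!-- ... other descriptor entries ... -->
    <supportedProfiles>extendedDesktop</supportedProfiles>
</application>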

Can I use a local file as a source in a live page?

I like to use JSFiddle when designing a new interface because I find the various tools within it convenient. I'm working on the front end of a site where I want to use a video, and unlike an image, I can't just throw it up on Imgur and link to it for free instant hosting while I fiddle with the interface design.
So I want to know if I can somehow use a local file on my PC as the source for an HTML video element hosted on a live site. Obviously this is trivial to do with a web project being worked on locally on my desktop, but I'm not sure it can be done in a live test.
For example this would work on a page I open from my desktop, living on my PC:
<video id="Video-Player">
<source src="../movie.mp4" type="video/mp4"/>
</video>
But I don't know whether I can do the equivalent with a page living on the web.
Here's how to allow a user to select an image from their local machine. This should get you started in the right direction.
Add a file input button in the HTML
<input type="file" id="file-btn"/>
and the corresponding handler
document.getElementById('file-btn').addEventListener('change', function(e){
    readFiles(e.target.files);
})
Then the code to read the files
function readFiles(files){
    files = [].slice.call(files); // turning files into a normal array
    for (var file of files){
        var reader = new FileReader();
        reader.onload = createOnLoadHandler(file);
        // there are also reader.onerror, reader.onloadstart, reader.onprogress, and reader.onloadend handlers
        reader.readAsDataURL(file);
    }
}
Now, I've only done this with images, but this is how I read the image data.
function createOnLoadHandler(file){
    console.log('reading ' + file.name + ' of type ' + file.type)
    function onLoad(e){
        var data = e.target.result
        display(data);
    }
    return onLoad
}

function display(data){
    var img = document.createElement('img');
    img.src = data;
    var context = canvas.getContext('2d')
    context.clearRect(0, 0, WIDTH, HEIGHT);
    context.drawImage(img, 0, 0, WIDTH, HEIGHT);
}
Here is a demo of the above code.
As a side note, if you try to read images from another domain you'll run into cross origin policy issues. I would think the same problem exists for videos as well.
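If the goal is specifically a <video> element fed from a local file, a simpler route than FileReader is to hand the File object straight to the element via an object URL. A minimal sketch, assuming a file input with id "video-btn" (my own naming) and the "Video-Player" element from the question:
<input type="file" id="video-btn" accept="video/mp4"/>
<script>
document.getElementById('video-btn').addEventListener('change', function(e){
    var file = e.target.files[0];
    if (!file) return;
    var video = document.getElementById('Video-Player');
    video.src = URL.createObjectURL(file); // blob: URL pointing at the local file
    video.play();
});
</script>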

How to get MP4 using JavaCV for live video streaming

I am trying to make an application that gets streaming video data using JavaCV and sends it to a WebSocket server. Then, the WebSocket server distributes the video data to the connected client(s).
1. The application gets live streaming data (MP4) from my PC's webcam using JavaCV.
2. The application keeps sending it to the WebSocket server.
3. The server receives it as binary and sends it to the connected clients.
4. A web browser connects to the server, and JavaScript running in the browser shows the live video after receiving it from the server.
I am new to JavaCV and OpenCV. The snippet below works for drawing video using com.googlecode.javacv.CanvasFrame with no problem. However, I am not sure how to grab MP4 data as live streaming data.
try {
    FrameGrabber grabber = FrameGrabber.createDefault(0);
    grabber.setFormat("mp4");
    grabber.setFrameRate(30);
    grabber.setImageWidth(640);
    grabber.setImageHeight(480);
    grabber.start();

    double frameRate = grabber.getFrameRate();
    long wait = (long) (1000 / (frameRate == 0 ? 10 : frameRate));
    ByteBuffer buf = null;

    while (true) {
        Thread.sleep(wait);
        IplImage image = grabber.grab();
        if (image != null) {
            buf = image.getByteBuffer();
            send(buf); // Send video data using the web socket.
        }
    }
} catch (FrameGrabber.Exception | InterruptedException ex) {
    Logger.getLogger(WSockClient.class.getName()).log(Level.SEVERE, null, ex);
}
Below is the code for the WebSocket server and the JavaScript/HTML5 client.
Both work fine if the application reads an MP4 file from local disk and sends it to the server. But the data obtained with JavaCV as above seems to be invalid for showing in the browser.
The server just receives the data and distributes it to the clients; I believe there is no problem there. Here is the code.
@OnMessage
public void binaryMessage(ByteBuffer buf, Session client) throws IOException, EncodeException {
    for (Session otherSession : peers) {
        if (!otherSession.equals(client)) {
            otherSession.getAsyncRemote().sendBinary(buf, new StreamHandler());
        }
    }
}
Here is the JavaScript/HTML5:
<body>
    <video controls width="640" height="480" autoplay></video><br>
    <canvas id="canvas1" width="640" height="480"></canvas><br>
</body>
<script>
    var ws;
    var protocol = 'ws';
    var host = "localhost:8080";
    var url = protocol + "://" + host + "/live/stream";
    ws = new WebSocket(url);
    ws.binaryType = 'arraybuffer';
    ws.addEventListener("open", onOpenWebSocket, false);
    ws.addEventListener("close", onCloseWebSocket, false);
    ws.addEventListener("message", onMessageWebSocket, false);
    window.addEventListener("unload", onUnload, false);
    ...
    function onMessageWebSocket(event){
        var blob = new Blob([event.data], {type: 'video/mp4'});
        var rb = blob.slice(0, blob.size, 'video/mp4');
        video.src = window.URL.createObjectURL(rb);
    }
    ...
</script>
I think I need to obtain the MP4 data as a stream using JavaCV, but I don't know how to do that. Please help.
Any comments or suggestions would be appreciated. Thank you.

Get chrome tabs and windows from localStorage

I am trying to access tabs and windows data inside a Google Chrome extension. I've apparently managed to get this info and load it through localStorage, but I don't know how to use the information, since I can't seem to parse the data back into arrays of objects with JSON.parse.
Here's the code:
<html>
<head>
<script>
    tabs = {};
    tabIds = [];
    focusedWindowId = undefined;
    currentWindowId = undefined;
    localStorage.windowsTabsArray = undefined;

    function loadItUp() {
        return arrays = chrome.windows.getAll({ populate: true }, function(windowList) {
            tabs = {};
            tabIds = [];
            var groupsarr = new Array();
            var tabsarr = new Array();
            var groupstabs = new Array();
            for (var i = 0; i < windowList.length; i++) {
                windowList[i].current = (windowList[i].id == currentWindowId);
                windowList[i].focused = (windowList[i].id == focusedWindowId);
                groupsarr[windowList[i].id] = "Untitled" + i;
                for (var j = 0; j < windowList[i].tabs.length; j++) {
                    tabsarr[windowList[i].tabs[j].id] = windowList[i].tabs[j];
                    groupstabs[windowList[i].id] = windowList[i].tabs;
                }
            }
            localStorage.groupsArray = JSON.stringify(groupsarr);
            localStorage.tabsArray = JSON.stringify(tabsarr);
            localStorage.groupsTabsArray = JSON.stringify(groupstabs);
        });
    }

    function addGroup() {
        var name = prompt("NEW_GROUP_NAME");
        var groupsarr = JSON.parse(localStorage.groupsArray);
        groupsarr.push(name);
        localStorage.groupsArray = JSON.stringify(groupsarr);
    }
</script>
</head>
<body onload="loadItUp()">
    WINDOW_QTY:
    <script type="text/javascript">
        var wArray = JSON.parse(localStorage.groupsArray);
        document.write(wArray);
    </script>
    <br/>
    TABS_QTY:
    <script type="text/javascript">
        var tArray = JSON.parse(localStorage.tabsArray)
        document.write(tArray);
    </script>
    <br/>
    WINDOWS_TABS_QTY:
    <script type="text/javascript">
        document.write(JSON.parse(localStorage.groupsTabsArray));
    </script>
    <br/>
</body>
</html>
1)
The page shows a bunch of [object Object].
That's expected: objects are implicitly converted to strings when you call document.write(tArray); custom objects without a custom toString implementation are converted to "[object Object]". It doesn't mean they're not "parsed".
To inspect the objects you can use the Developer Tools. You can open the inspector for a background page from the Extensions page, and if you get your page to open in a tab (e.g. if you use chrome_url_overrides), you can inspect it as you would a regular web page.
If you replace the document.write calls with console.log(), you'll be able to inspect the objects in the Developer Tools' console.
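For example, something along these lines (a small sketch, not your exact page) makes the actual structure visible:
var tArray = JSON.parse(localStorage.tabsArray);
console.log(tArray);                    // expandable objects in the DevTools console
document.write(JSON.stringify(tArray)); // readable text instead of [object Object]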
2)
Do you realize that the document.write calls in the <script> tags run before loadItUp()?
I had no idea that the page code was being executed before loadItUp().
Scripts are executed at the moment they are inserted into the DOM by the parser (unless they are deferred or async) - see the MDC documentation on <script> - while the various load events, in particular <body onload=...>, fire after the page has finished parsing.
So right now your document.write calls print the values that were saved to localStorage the previous time the page was loaded, which is probably not what you wanted.
Instead of using document.write() from inline scripts, you should use element.innerHTML or element.textContent to update the page's text. There are many ways to get a reference to the element you need; document.getElementById() is one.
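As a rough sketch (assuming you add placeholder elements such as <span id="window-qty"></span> and <span id="tabs-qty"></span> to the body - those ids are just examples), you could fill the counts in from inside the chrome.windows.getAll callback itself:
function loadItUp() {
    chrome.windows.getAll({ populate: true }, function(windowList) {
        var tabCount = 0;
        for (var i = 0; i < windowList.length; i++) {
            tabCount += windowList[i].tabs.length;
        }
        // Update the page only once the data is actually available.
        document.getElementById('window-qty').textContent = windowList.length;
        document.getElementById('tabs-qty').textContent = tabCount;
    });
}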
3)
Last, note that not every object can be saved to and then loaded from localStorage. For example, methods will not survive the round-trip, and the identity of the object is not preserved, meaning that the object you got from a Chrome API will not be the same object after you store it in localStorage and load it back.
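A quick way to see this for yourself, inside the chrome.windows.getAll callback where windowList is in scope:
var original = windowList[0].tabs[0];
var copy = JSON.parse(JSON.stringify(original));
console.log(copy === original); // false: a new plain object holding only the data properties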
You have not explained why you think you need localStorage - it's used when you want to preserve some data after the page is closed and reloaded - so maybe you don't really need it?