I'm using an API to get some data, but one of the JSON values contains HTML inside of it. Is there a way for me to use the HTML tags that are inside the JSON value exactly as they are shown, instead of having to recreate the exact same HTML tags in my MVC view and edit these ones out? If this wasn't descriptive enough, here's an example of what I mean:
This is just one of the JSON values; I didn't want to paste the whole thing:
"detail": "<h1>Overwatch Patch Notes – October 19, 2016</h1>\r\n\r\n<p>A new patch is now live on Windows PC. Read below to learn more about the latest changes.</p>\r\n\r\n<p>To share your feedback, please post in the General Discussion forum.<br />\r\nFor a list of known issues, visit our Bug Report forum.<br />\r\nFor troubleshooting assistance, visit our Technical Support forum.</p>\r\n\r\n<p>Please note that these changes will be rolled into a larger patch for PlayStation 4 and Xbox One at a later date.</p>\r\n\r\n<h2>BUG FIXES</h2>\r\n\r\n<p><strong>General</strong></p>\r\n\r\n<ul>\r\n\t<li>Fixed an issue causing the default Overwatch spray to override a player’s chosen spray when watching a Play of the Game or Highlight</li>\r\n\t<li>Fixed an issue causing players to frequently disconnect while viewing Highlights</li>\r\n\t<li>Fixed a bug preventing the appropriate music from playing after a loss on the Junkenstein's Revenge Brawl</li>\r\n</ul>\r\n\r\n<p><strong>Gameplay</strong></p>\r\n\r\n<ul>\r\n\t<li>Fixed a bug causing multiple issues when displaying data on the leaderboards</li>\r\n</ul>\r\n\r\n<p><strong>Heroes</strong></p>\r\n\r\n<ul>\r\n\t<li>Fixed an issue preventing Ana’s Nano Boost callouts from being heard by the enemy team</li>\r\n\t<li>Fixed a bug preventing players from receiving credit toward the Healing Done commendation when healing D.Va’s mech</li>\r\n\t<li>Fixed a graphical issue that was preventing the liquid in Mei’s Endothermic Blaster from appearing</li>\r\n\t<li>Fixed a bug causing Reinhardt’s Charge to unexpectedly stop when crossing certain thresholds (e.g. when exiting a dropship)</li>\r\n\t<li>Increased the volume of Roadhog’s “Want some candy” voice line</li>\r\n</ul>\r\n\r\n<p><strong>Map</strong></p>\r\n\r\n<ul>\r\n\t<li>Fixed a bug on Eichenwalde that caused some textures to stretch across the map for some players</li>\r\n</ul>\r\n".
You can use Html.Raw inside your view to render the raw string value:
@Html.Raw(Model.detail)
https://msdn.microsoft.com/en-us/library/gg480740(v=vs.118).aspx
Several weeks ago I was asked to upgrade a web application based on a very old version of the mxGraph library (version 2.4). The application also integrated the 'grapheditor', a sort of demo application that later evolved into Diagramly and then into Draw.io. Recently I completed the more problematic step, the transition from the old 'grapheditor' to Draw.io, so I am now able to open all the previous diagrams (saved as plain XML), modify them, and save them consistently.
Ok, this is the nice part. The bad side is the 'read-only' section of the application, where the users can, more or less, only view the graph.
This page is based on mxClient.js, which renders the graph described in the XML through this code:
var graph = new mxGraph(container);
var diagram = mxUtils.parseXml(xml);
var codec = new mxCodec(diagram);
codec.decode(diagram.documentElement, graph.getModel());
graph.fit();
After upgrading the mxGraph library to the latest version (3.9.10), the same code works, but some shapes are not rendered properly: they appear as squares instead of circles, ellipses, etc. The two following images are an example of this misbehavior.
Graph in draw.io:
Same graph rendered by mxClient:
After some tries I discovered that the old mxClient is able to render the same graph perfectly (as draw.io does), so I think there has to be something wrong (or missing) in my code or in my mxGraph installation/configuration.
As a temporary workaround I can keep the old version of mxGraph in place, but obviously I'd like to use the new one.
Can someone give me a hint on this? Any help would be much appreciated.
The tape shape isn't part of core mxGraph; it's part of the GraphEditor example, in the additional shapes JavaScript.
If you look at the style of the ellipse, it's probably not the one in the core, most likely another one from Shapes.js.
Either pull in Shapes.js, or use the viewer in draw.io.
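For illustration, a minimal sketch of the read-only page with the GraphEditor's Shapes.js pulled in. The assumption is that Shapes.js has been copied into your project and loaded via a script tag after mxClient.js but before this code runs; the 'tape' registry check is also an assumption based on recent mxGraph versions:

// Shapes.js registers the non-core stencils (tape, extra ellipses, ...) with
// mxCellRenderer, so the unchanged decoding code then renders them correctly.
var graph = new mxGraph(container);
var diagram = mxUtils.parseXml(xml);
var codec = new mxCodec(diagram);
codec.decode(diagram.documentElement, graph.getModel());
graph.fit();

// Optional sanity check: an unregistered shape style falls back to a rectangle,
// which matches the "squares" symptom described above.
if (mxCellRenderer.defaultShapes['tape'] == null) {
    console.warn('Shapes.js does not appear to be loaded');
}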
I have an app where I play different code-generated sounds. I place these sounds in an AudioBufferSourceNode.
I allow the user to choose which output device to play the sound through, so I use a MediaStreamAudioDestinationNode with its stream used as the source for an Audio element. This way, when the user chooses an audio output to play the sound to, I set the sink ID of the Audio element to the requested audio output.
So I have AudioBufferSourceNode -> some audio graph (gain nodes, etc.) -> MediaStreamAudioDestinationNode -> Audio element.
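For reference, a minimal sketch of that routing (audioCtx, someGeneratedBuffer and outputDeviceId are placeholders, not my actual variables; outputDeviceId would come from navigator.mediaDevices.enumerateDevices()):

var audioCtx = new AudioContext();
var streamDestination = audioCtx.createMediaStreamDestination();
var audioElement = new Audio();
audioElement.srcObject = streamDestination.stream;   // the stream feeds the element
audioElement.setSinkId(outputDeviceId);              // route to the chosen output (returns a Promise)
audioElement.play();

var source = audioCtx.createBufferSource();
source.buffer = someGeneratedBuffer;
var gain = audioCtx.createGain();
source.connect(gain);                                // source -> gain -> stream destination
gain.connect(streamDestination);
source.start();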
When I play the first sound, it sounds fine. But when I create a new source and connect it to the same MediaStreamAudioDestinationNode, the sound is played at the wrong pitch.
I created a Fiddle that shows the problem.
Is this a bug, or am I doing something wrong?
The problem was identified based on the OP's Chrome ticket.
It seems to come from a lack of sync between the Audio element and its source AudioNode (AudioBufferSourceNode, OscillatorNode, etc.) when you pause the source and play it back again.
The solution is to always call AudioElement.pause() and AudioElement.play() alongside your source's stop and start.
https://jsfiddle.net/k1r7o0xj/3/
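A rough sketch of that idea, reusing the placeholder names audioCtx, streamDestination and audioElement from the setup sketch above:

// When stopping the current sound, also pause the element:
currentSource.stop();
audioElement.pause();

// When starting the next sound, resume the element alongside the new source:
var nextSource = audioCtx.createBufferSource();
nextSource.buffer = someGeneratedBuffer;
nextSource.connect(streamDestination);
audioElement.play();
nextSource.start();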
It's possible to dynamically change your graph layout by using .connect() and .disconnect(), even when audio is playing or sent through a stream (which could even be streamed over WebRTC).
I couldn't find a reference in the spec, so I'm pretty sure this is taken for granted.
For example, if you have two AudioBufferSourceNodes bufferSource1 and bufferSource2, and a MediaStreamAudioDestinationNode streamDestination:
bufferSource1.connect(streamDestination);
//do some other things here, and after some time, switch to bufferSource2:
//(streamDestination doesn't need to be explicitly specified here)
bufferSource1.disconnect(streamDestination);
bufferSource2.connect(streamDestination);
Example in action.
Edit 1:
Proper implementation:
According to the Editor's Draft of the Audio Output API, it is planned that choosing a custom audio output device for the AudioContext will be possible as well (by means of new AudioContext({ sinkId: requestedSinkId })). I couldn't find any info on the progress, and even found a related discussion which the asker apparently read already. According to this and (many) other references, it doesn't seem to be an easy task, but it's planned for WA V1.
Edit:
That section has been removed from the API Draft, but you can still find it in an older version.
Current workaround:
I played around with your workaround (using a MediaStreamAudioDestinationNode and an Audio object), and the issue seems to be related to nothing being connected. I modified my example to toggle a single buffer (similar to your example, but with an AudioBufferSourceNode), and observed a similar frequency drop. However, when using a GainNode in between and setting its gain.value to either 0 or 1, the frequency drops disappeared (this isn't going to be the solution if you want to create and connect new AudioBuffers dynamically).
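A rough sketch of that GainNode workaround (variable names are mine; audioCtx and streamDestination as in the earlier sketches). The GainNode stays connected to the destination the whole time, so the stream is never left without an attached source:

var muteGain = audioCtx.createGain();
muteGain.connect(streamDestination);

function playBuffer(buffer) {
    muteGain.gain.value = 1;               // un-mute
    var source = audioCtx.createBufferSource();
    source.buffer = buffer;
    source.connect(muteGain);
    source.start();
    return source;
}

function stopBuffer(source) {
    muteGain.gain.value = 0;               // mute, but keep the graph connected
    source.stop();
}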
I am using Vimeo's Flash API so that I can embed a video, read its timecode using the playProgressHandler, pause it at certain times, pop a menu, and use buttons that trigger seekTo calls. Although everything works, the timecode is inaccurate to varying degrees, anywhere from 1-2 seconds. I can tell this because:
1) If I play my video on Vimeo and pause it at 6:03, then do the same with it embedded in Flash, the visuals do not match up. Flash is lagging behind a tad.
2) I did a test using the JavaScript API. My seekTo calls were consistently accurate. To seek to the same spot using the AS3 API I had to add 1.5 seconds. But even this isn't foolproof. Sometimes it works, but sometimes it's still off.
Any ideas what would account for this inaccuracy and how I might fix this problem? Yes, I can ditch the AS3 and use the JS version, but I'd prefer to just fix what I've already built.
(I also posted this on Vimeo's forum, but I'm following their "Limited support in API Forum" post which suggests to post here)
Unfortunately, there's not much we can do to fix this other than to recommend that you use our iframe embed.
It has to do with the way that we retrieve files from our CDN. Because Flash doesn't support byterange requests, we pass a parameter that returns part of the file starting at that position. The nature of how that works means it's always going to be imprecise.
I am currently working on a rather large, UI-heavy Flash game. Our team has been working on this for about 9 months now. None of us had any previous experience with Flash, so we have continually improved our workflows during this time. However, we still feel that what we are doing now is not optimal, especially the interface between coders and artists, so I am wondering how other teams are working.
The ideal workflow should satisfy the following requirements:
1. Reused UI elements are defined only once
This means that if we want to change a font or a button style, we do not want to go through all of our menus and change them manually. We want them defined in one central place and only referenced from there. Bonus points if the assets are shared not only at edit time but also at runtime, i.e. they are downloaded only once.
2. Everything is loaded on demand
Currently, we have two different loading steps: First, we load the menu libraries. When this is done, the players can already interact with all the menus. Then, we start loading the actual gameplay data. The initial loading time is still too long, though, and causes us to lose many potential players. What we really want to do is to load only the bare minimum required for the main menu and then load everything else only when the player tries to actually open the respective menus. Zuma Blitz does this really well.
3. Artists can perform minor changes without help from coders
If a menu should be re-designed without changing the actual functionality, it should be possible for artists to do that on their own in Flash CS6. This requires a clear interface between art and code, and it should also be possible for artists to test and debug their changes before sending them to the coders.
-
Our current workflow looks like this: The artists build the screens as MovieClips in Flash CS6 and export them as SWFs. On the code side, we load the MovieClips from the screen SWFs and use them as the View classes in our PureMVC-based system. The Mediators access elements like text fields in the Views by their instance names.
This is error-prone because there is no central place to define the interface (i.e. the instance names). A lot of communication overhead between coder and artist is required. Also, it creates a dependency between the code and the internal structure of the movieclip. The artists cannot attach the text field to a different sub-movieclip when they want to apply some effects to it.
We are experimenting with an event-based interface that requires the artist to add a few lines of code to the movieclip. This is less error-prone and interdependent than before, but it still does not completely satisfy (3) unless we write additional tools for testing and debugging. This must be a common problem and I can hardly imagine that there is no easier way.
For (2), we also started building a home-brewed solution but again, this is such a common task, there has to be something out there already that we can use.
So, how do experienced Flash developers manage such large projects?
I have some thoughts, but they are based on my coding style, which is unique to me.
1. Reused UI elements are defined only once
Depending on what you're reusing, this can be as simple as defining a library symbol and just using it. Fonts can be replaced with a search and replace rather than by digging through every screen, and you can also simply swap out the font in the Font Embedding menu.
For sharing across xfl's, you can use a Flash Pro Project. Keep in mind that there's a certain amount of time overhead involved in this (files will want to update when you open them or save them, Flash crashes more with Projects, and it can be a bad idea to try to work on two files from the same project at once).
Some people use swcs, but doing so requires that you instantiate things in it in code, which might not work for your workflow. I use them for audio only, and I find that the objects in it have to be compiled on or before the frame you designate as the AS compile frame, or the sound can't be properly instantiated. I suspect this is going to be the case for anything instantiated from a swc.
2. Everything is loaded on demand
One of the best-kept secrets of Flash is that this is trivially easy to accomplish using the timeline and educated use of the compiler. Here's how it works:
If your ActionScript compile frame is a frame greater than 1, then here is how things will compile:
Before Frame 1:
Any visual assets and embedded sounds used on frame 1
Your main Document Class, plus any Classes directly referenced from the Document Class (which is a good reason to code to Interfaces)
Before your AS compile frame (N):
Your AS Classes (the code, not necessarily the visual/audio assets)
The visual and audio assets for any library symbols set to Export for AS in frame N (even if they are not used in the swf)
Before the frame where the asset is first used on the timeline:
The visual/audio assets in all library symbols where Export for AS in frame N is not checked.
If you put a spinner loading graphic on frame 1 and you have selected frame 10 as your export frame, then if you just let the movie play until it hits frame 10, here is how it will load:
If you have any heavy assets in your spinner or directly referenced in your main Document Class, users will see a blank screen while this stuff downloads
The spinner will become visible and spin
Once your AS Classes have loaded, along with the Library Symbols set to Export in Frame 10 and the assets that are actually on Frame 10, you'll see those assets, and everything you need to use them will be ready.
The rest of the swf will continue to load in the background, updating framesLoaded.
I actually use a combination of a setter for the object that's on frame 10, plus an ENTER_FRAME handler that checks whether we're on frame 10 yet. Some of the things I need to do are easier with one approach, and others work better with the other.
3. Artists can perform minor changes without help from coders
If the code is all in the Base Class for the library symbol, artists don't need to understand it, as long as they don't remove or change a needed instance name. I try to minimize dependence on instance names by watching ADDED_TO_STAGE (capture phase) and watching for Display Objects by type. Once I have a reference to an object of the appropriate type, it's easy enough to watch for REMOVED_FROM_STAGE on that object to dereference it. This is similar to how frameworks such as RobotLegs and Swiz work.
Further, I use a concept I call "Semantic Flash," where I do a lot based on labels. I have a base Class, FrameLabelClip, which has built-in nextLabel() and previousLabel() functionality, as well as dispatching FRAME_LABEL_CONSTRUCTED events. It's really easy to go from storyboard event name to Flash label name and just build out the graphics bang-bang-bang.
I make heavy use of Graphic Symbols for synchronizing graphics across multiple labels (for example, bulleted lists), instead of relying on code. This animator's trick makes these things both robust and approachable to less-technical teammates.
Okay, so I've been busting my hump the last week or so on this project for my OOP/AS3 course and this past Sunday I realized that my approach wasn't going to work so I scrapped the better part of it and started over.
Our assignment is to create an XML based flash menu that demonstrates an understanding of the OOP patterns we've just learned. It was kind of a 'test the waters' project where he gave us a ton of tutorials and information and told us to make our best attempt at making sense of it so I'm certain there are more efficient ways to do what I'm doing, but that's a moot point.
We need to employ at least two patterns in our menu, though at the moment I'm just focusing on MVC so that I can get the mainUI working before I finalize the second part of the UI. It essentially flows like so:
MainUI has 4 menus that slide out.
Each slider has 3 thumbnails on it.
Clicking on any of the thumbnails will move to the next part of the UI. This functionality is currently disabled.
The program runs with 0 compiler errors, but the images are not being placed on the stage correctly and I can't figure out why. All the image paths are being pulled and stored from the XML properly. The main background image is pulled once and is supposed to be only placed once (if statement that uses a count to determine whether to run the placement function or not), but it is being placed 4 times with the sliding menu image. The sliders are being placed in the correct positions (switch statement that iterates through the mainUI function in the View class and creates a separate loader for each one), but the thumbnails are not all showing up. So here is what I'm seeking help with:
The mainPanel image should only be placed once, rather than 4 times with each slider.
The sliders, while being placed correctly, must be tweened in different directions through the AS (using TweenMax), but each instance is indistinguishable from the others, so right now they all have an eventListener that calls the same tween method. How can I distinguish them in a way that lets me apply a different tween to each? (This will likely be a concern with the thumbnail functionality later, as I will need to load different XML data based on which thumb is clicked.)
I have added what I hope are very informative comments to each script so hopefully people can help. Also included are images of what I want the mainUI to eventually look like and how it's coming out currently.
pastebin with all 3 classes and XML (2 hyperlink limit) - http://pastebin.com/u/crookedparadigm
top image is how the stage is outputting, bottom image is what I'd like it to be - http://imgur.com/a/bOmsS
Last quick note, stage is currently set to 600x480 with a black background. Ideally, to reinforce OOP principles, our professor wants us to avoid using the timeline or library if possible.
Any advice at all will be greatly appreciated! Thanks!
Install FlexPMD. This is a very good add-on (sometimes hard to install). It basically shows areas of your code where you are not following standards. For example, your classes lack the use of "this", and you should avoid passing parameters in constructors. It would be good practice to develop standardized writing habits while you are still new.
Looking at your code I see you are calling buildUI from within a loop.
buildUI is assigning a MainView object to mainUI.
So each time you go through a loop iteration you are reassigning mainUI.
In the end, mainUI will only hold the result of the last iteration of that loop.
Not sure this is your issue, but it is an issue.
[EDIT]
Excellent Singleton guide for Flex SDK
Part 1
Part 2
Some good writing on pure AS3 Singletons.
I would probably start from scratch, as your XML data is badly formatted.
Your XML should resemble something like this:
<MainProject>
    <MainUI>
        <Thumbnail Name="Spring">
            <Destination Name="Spring" Price="99" ratingPath="images/SP1/SP1rating.png" />
        </Thumbnail>
        <Thumbnail Name="Winter">
            <Destination Name="Winter" Price="152" ratingPath="images/SP1/SP2rating.png" />
        </Thumbnail>
    </MainUI>
</MainProject>
Then you should have the following structure on your stage. These MovieClips should be empty and already placed on your stage with instance names.
Stage
    MenuUI MovieClip
        ThumbNail1 MovieClip <- feed it a thumbnail from the XML
        ThumbNail2 MovieClip <- feed it a thumbnail from the XML
        ThumbNail3 MovieClip <- feed it a thumbnail from the XML
        ThumbNail4 MovieClip <- feed it a thumbnail from the XML
This might be a bit too vague, just tell me if you need more details.
Hope this helps!