I'm looking for something like CATiledLayer (on iOS), but for use in PhoneGap/Sencha Touch 2.
The idea is to "stream" a very large image from a server in form of tiles - very much like Google Maps does the job.
It should support touch gestures on mobile devices so a user can pinch zoom and scroll.
Unfortunately I couldn't find anything, so a few pointers would be highly appreciated.
Update:
In the meantime I took a look at OpenLayers, which seems to do what I want and manages multiple zoom levels and such. Unfortunately it is tied too closely to geospatial data and there is no way to disable projections to make it work like a basic image viewer.
I also found GSV (Big Ass Image Viewer) - unfortunately it doesn't support touch gestures and generally seems to be abandoned.
To me it's just weird that nobody has really needed something like this before, and I try to avoid reinventing the wheel as much as I can. But right now it doesn't look like there are any non-geodata/map-related solutions.
I've never seen anything like this done using Sencha Touch 2, though I'm going to assume that putting something like this together wouldn't require much.
It would basically be your own custom component (which would be scrollable). Inside that component you would insert one child which would be the size of your image (let's say 15000px x 15000px) so it overflows. You would then listen to the scroll event on the parent container and, when it reaches a certain x/y position, update the child item with that section of the image.
I'm not sure what the best solution for the child's HTML would be. Perhaps a bunch of divs, but I'm thinking <canvas> would be best for images.
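A minimal, framework-agnostic sketch of the idea (the tile size, element IDs and tile URL scheme are assumptions; in Sencha Touch you would hook the same logic into the container's scroll event):

// Sketch: a scrollable viewport with an oversized child, loading only the
// tiles that intersect the visible area. Tile size and URL pattern are made up.
var TILE = 256;
var viewer = document.getElementById('viewer');  // fixed size, overflow: scroll
var world  = document.getElementById('world');   // 15000px x 15000px child, position: relative
var loaded = {};

function loadVisibleTiles() {
  var startCol = Math.floor(viewer.scrollLeft / TILE);
  var startRow = Math.floor(viewer.scrollTop  / TILE);
  var endCol   = Math.ceil((viewer.scrollLeft + viewer.clientWidth)  / TILE);
  var endRow   = Math.ceil((viewer.scrollTop  + viewer.clientHeight) / TILE);

  for (var row = startRow; row < endRow; row++) {
    for (var col = startCol; col < endCol; col++) {
      var key = col + '_' + row;
      if (loaded[key]) { continue; }  // this tile was already requested
      var img = document.createElement('img');
      img.src = '/tiles/0/' + col + '/' + row + '.jpg';  // hypothetical tile server
      img.style.position = 'absolute';
      img.style.left = (col * TILE) + 'px';
      img.style.top  = (row * TILE) + 'px';
      world.appendChild(img);
      loaded[key] = true;
    }
  }
}

viewer.addEventListener('scroll', loadVisibleTiles);
loadVisibleTiles();  // initial fill

The same loop works whether the tiles end up as img elements, divs with background images, or drawImage() calls onto a canvas.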
Related
I'm trying to build myself a very simple framework to manage drawing content to the same WebGL canvas via multiple views and React. I want to be able to use the same resource in different areas of the window, so I'm trying to avoid using multiple canvases.
The end result should be something like this example from three.js.
I'm pretty confused about how to manage this from the HTML side and am struggling to figure out whether there are any limitations of this approach that should be considered.
<WebGLContext.Provider value={contextState}>
<canvas ref={canvasRef} style={canvasStyle} />
{children}
</WebGLContext.Provider>
This is my top level wrapper. With this, I can instantiate a <WebGLView/> wherever I want and get its rectangle to be used as the "viewport" into the canvas. Just like in the example, I scissor out that rect and draw some content there. Because my entire React app renders on top of this, I can put any content over that view. But... I can also obscure it. This only works if the divs above it are transparent, or there is hardly any overlap between these viewports.
The view is something like:
<div ref={viewRef}>
{children}
</div>
Another approach that I had in mind is to use React portals to manage another layer, above the canvas.
Something like this:
<WebGLContext.Provider value={contextState}>
{children}
<canvas ref={canvasRef} style={canvasStyle} />
<div ref={aboveCanvasPortal}/>
</WebGLContext.Provider>
Since I know the rectangle of the viewport for my WebGL drawing, I can manage the HTML above it in a similar way: draw an absolutely positioned div in it and put some UI content in there. This also doesn't feel like it would scale very well, but I could at least have a scrollable column with a background color, a WebGL view in it, and some UI on top of it. Overlapping components would probably break this.
The view is something like:
<div ref={viewRef}>
{ReactDOM.createPortal(children, aboveWebgl)}
</div>
I've been thinking of using toDataURL() and then passing the result as a background image to the views. This seems like it would solve the stacking/overlapping issue, and I could have a very simple HTML structure. But wouldn't this add a tremendous amount of overhead on top of WebGL? If so, is there a way to do it more cheaply, since the browser has to composite all of this somehow anyway?
Use-case-wise, my main target is something like react-mosaic, where I just have a bunch of rectangles, very flat, within one viewport (a div or the window). The second approach feels like it would work best. And then perhaps, if I put a modal on top of that, creating another layer of below/canvas/above HTML would make sense, but probably no more layers than that?
When taking a deeper look into the code of the three.js example you provided, you will note that there's just a simple <canvas id="c"></canvas> without any wrapping at all.
The key to your question is not to think primarily about viewports, but about scissor boxes -- as used in the render() function of the aforementioned example. If you prefer (like me) to use raw WebGL instead of three.js, take a look at the MDN documentation on WebGLRenderingContext.scissor() and on basic scissoring as a starting point.
That should reduce the complexity of your problem and bring it back down to the level of (more performant) WebGL, instead of trying to patch things together at the HTML level.
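As a rough sketch of the scissoring idea in raw WebGL (the .webgl-view class and the colours are made up; in practice you would draw your scene for each view instead of just clearing):

// Sketch: one canvas, several "views", each clipped with a scissor box.
// getBoundingClientRect() on each view's placeholder div gives its rectangle.
var canvas = document.getElementById('c');
var gl = canvas.getContext('webgl');

function renderView(viewElement, r, g, b) {
  var rect = viewElement.getBoundingClientRect();
  var canvasRect = canvas.getBoundingClientRect();
  // Convert to canvas-relative coordinates; WebGL's origin is the bottom-left corner.
  var left   = rect.left - canvasRect.left;
  var bottom = canvasRect.bottom - rect.bottom;

  gl.enable(gl.SCISSOR_TEST);
  gl.viewport(left, bottom, rect.width, rect.height);
  gl.scissor(left, bottom, rect.width, rect.height);
  gl.clearColor(r, g, b, 1);
  gl.clear(gl.COLOR_BUFFER_BIT);
  // ...draw the scene for this particular view here...
}

function frame() {
  document.querySelectorAll('.webgl-view').forEach(function (el, i) {
    renderView(el, i * 0.3, 0.5, 0.8);  // placeholder colours per view
  });
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);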
I have a problem that seems pretty simple to me, but so far it has been impossible to find a simple solution: on my website, whenever the Android soft keyboard pops up, it resizes the window and shrinks the content, instead of just overlaying the page.
See these pictures for reference:
The first two are the current situation, the third is what I want. It works like this on iOS. What can I do to make it work that way?
The screenshots were taken in Firefox - this is a website based on HTML, not a native app.
I tried setting the body size and position, but so far, no luck. I've seen some very complicated JS snippets for similar problems, but I didn't get any of them to work the way I want, and it also seems like there should be an easier way around it. The sizes of all the elements are determined with vh and vw. Setting fixed pixel values seems like it would kill the responsiveness of the design, no?
I'm not a very experienced developer, my page is just very basic HTML and CSS. Is there a way to achieve what I want with only that?
In your AndroidManifest.xml you can set android:windowSoftInputMode to adjustPan.
<activity
android:name=".WebActivity"
android:windowSoftInputMode="adjustPan" />
From Android documentation:
Don't resize the window to make room for the soft input area; instead pan the contents of the window as focus moves inside of it so that the user can see what they are typing. This is generally less desirable than resizing, because the user may need to close the input area to get at and interact with parts of the window.
I have the following doubt: in an app that I'm developing, I sometimes need to show a video that sits in the top right corner, like this:
The thing is, I need this to work on mobile, and if I've read correctly, there is no way to have more than one Stage3D instance, so this approach won't work.
The other option is to convert the static (cyan) area to the native display list, which solves the problem, but I might run into performance issues; I will need to test.
Any other ideas for showing a video in a layout like this?
I'm new to HTML5 and I'm trying to get to grips with drag and drop functionality.
This is the scenario: two different divs (or canvas) side by side.
What I want to do: drag the first div (or canvas) and drop on the second div.
Which result I expect: the first div's content ends up in the second div's position and the second div's content ends up in the first div's position. Like a normal swap.
I'm pretty sure this is possible, but the only thing I've managed to do so far was to append the first div into the second div.
Yes, of course it is possible, but the answer depends a bit on what the purpose is... If you want to learn how drag and drop works in HTML5, then write your own implementation based on the tutorial you mentioned. But if you want to build something for production within reasonable time, use a library - there are lots. Which one to use also depends on the platform you're targeting. If you're targeting mobile, you need touch support, and you need to consider multi-touch effects in your implementation - it's not just mouse clicks and drags anymore, but potentially lots of fingers simultaneously!
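For the hand-rolled route, here is a minimal sketch of swapping the contents of two drop targets with the native HTML5 drag-and-drop API (the .slot class name is an assumption; note that native drag-and-drop does not cover touch input, which is where the libraries below come in):

// Sketch: every .slot element is both draggable and a drop target;
// dropping one slot onto another swaps their innerHTML instead of appending.
var dragSource = null;

document.querySelectorAll('.slot').forEach(function (slot) {
  slot.setAttribute('draggable', 'true');

  slot.addEventListener('dragstart', function (e) {
    dragSource = slot;
    e.dataTransfer.effectAllowed = 'move';
    e.dataTransfer.setData('text/plain', '');  // some browsers need data set to start a drag
  });

  slot.addEventListener('dragover', function (e) {
    e.preventDefault();                        // allow dropping on this element
    e.dataTransfer.dropEffect = 'move';
  });

  slot.addEventListener('drop', function (e) {
    e.preventDefault();
    if (dragSource && dragSource !== slot) {
      var tmp = slot.innerHTML;                // swap, don't append
      slot.innerHTML = dragSource.innerHTML;
      dragSource.innerHTML = tmp;
    }
  });
});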
A nice demo using hammer.js can be found here: http://riagora.com/mobile/hammer/index2.html. Drag and Drop works both with mouse and touch, but you need a touch screen for zooming (by pinching) and rotation of images.
Another demo is a puzzle using scriptaculous, which does almost what you ask for. However, the current version, 1.9.0, is from December 23, 2010, so it doesn't seem to be actively developed anymore.
Bit of background
I've been producing a Flash-driven webcomic for three years now, incorporating some basic animation, a synced soundtrack and zoom-drag page viewing. The recent Flash-bashing, my desire to reach iHandhelds and my preference for open versus proprietary means that I want to make the move to HTML5 techniques this year. In the long-term, I think the writing's on the wall for Adobe's product, and I'm not entirely convinced that's a bad thing.
I'm relatively comfortable with both CSS and HTML, having worked a little in web design before. However, JavaScript is a foreign country to me, and I simply wanted to get some advice as to
whether what I want to achieve can be accomplished consistently across all browsers and
what the best techniques/approaches to the problem would be.
Any advice, even general principles, is very welcome. I've already sought out several HTML5 tutorials and introductions, which lead me to believe that the canvas element will be foundational to my plan; however, while all the individual problems I face have been answered by many blog posts and guides, combining the various solutions into a single entity is something I'm not currently able to figure out, as I'm not certain of the limitations of the new HTML5 tags, or of best practice.
If I'm successful in achieving what I'm after, I'm going to post the full code online with an explanation of all the elements. Webcomics might not be a huge domain, but having a resource that did this would have made my life a lot easier - hopefully it'll help someone else in a similar position.
What I'm after
Here's a diagram giving the basics of the design requirements. I'll explain the elements, and the desired extras, below.
(Perhaps the simplest way to demonstrate what I'm after would be for interested folks head over to my website and see how my comic currently works. This isn't a plug - it would simply give the quickest insight.)
At core, I'm after a viewer that will:
display text (an SVG image) in a canvas element above a raster image of the page's panel art
both images should be zoomable and draggable in sync, but should ideally fade in separately, with the raster image coming first, followed by the SVG image
I'm guessing that the best way to accomplish this would be to layer two canvas elements one above the other using z-index, with the SVG file in the uppermost element. These could then be nested, as in the diagram, within a div element that would carry the zoom-drag function. Is this a reasonable approach, or are there more efficient options?
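A rough sketch of that layering, assuming the zoom/drag is done by applying the same CSS transform to both canvases so they stay in sync (all element IDs are placeholders):

// Sketch: two stacked canvases inside one wrapper div.
// Assumed markup: <div id="viewer"><canvas id="art"></canvas><canvas id="text"></canvas></div>
// with #viewer styled position: relative; overflow: hidden.
var art  = document.getElementById('art');   // raster panel art
var text = document.getElementById('text');  // SVG lettering drawn on top

[art, text].forEach(function (layer, i) {
  layer.style.position = 'absolute';
  layer.style.left = '0';
  layer.style.top = '0';
  layer.style.zIndex = String(i);            // text layer sits above the art layer
});

var scale = 1, panX = 0, panY = 0;

function applyTransform() {
  // Applying the identical transform to both layers keeps them aligned
  // without any redrawing.
  var t = 'translate(' + panX + 'px,' + panY + 'px) scale(' + scale + ')';
  art.style.transform = t;
  text.style.transform = t;
}

// Fading the layers in separately, art first, lettering delayed:
art.style.opacity = '0';
text.style.opacity = '0';
art.style.transition = 'opacity 0.5s';
text.style.transition = 'opacity 0.5s 0.5s';
void art.offsetWidth;                        // force a reflow so the transition fires
art.style.opacity = '1';
text.style.opacity = '1';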
The next and previous buttons are self-explanatory. Would it be best to have each page (bearing in mind some will involve animation and music) on a separate HTML page, or to have all pages within a chapter on a single page, with the buttons making them visible progressively? I imagine this would have a great impact on loading speeds.
Finally, I'd like the viewer to be capable of displaying fullscreen if the reader desires. I imagine this could be accomplished by using JavaScript to switch the canvas elements and their surrounding div between different CSS rules, giving either a px-defined size or 100% height and width. Is this a good approach? Is it possible to apply the size change to the div element only and have the canvas elements automatically follow suit, possibly by defining their size via % in CSS?
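A sketch of that idea, toggling a class on the wrapper and letting the canvases follow via percentage CSS sizes (class and element names are made up; note that CSS only scales how the canvas is displayed, not its drawing-buffer resolution, so the buffers usually need resizing too):

// Sketch: toggle a "fullscreen" class on the wrapper; the canvases follow
// because their CSS size is 100% of the wrapper.
// Assumed CSS:
//   #viewer            { width: 800px; height: 600px; position: relative; }
//   #viewer.fullscreen { position: fixed; top: 0; left: 0; width: 100%; height: 100%; }
//   #viewer canvas     { width: 100%; height: 100%; }
var viewer = document.getElementById('viewer');

document.getElementById('fullscreenButton').addEventListener('click', function () {
  viewer.classList.toggle('fullscreen');
  // The CSS only stretches the canvases; resize their drawing buffers as well,
  // otherwise the artwork is scaled up rather than redrawn at full resolution.
  viewer.querySelectorAll('canvas').forEach(function (canvas) {
    canvas.width = viewer.clientWidth;
    canvas.height = viewer.clientHeight;
    // ...redraw this layer here...
  });
});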
Desired extras
At various points in the comic I make use of basic animation techniques - simple movements of layered raster images across the viewing pane. This would be simple to accomplish, I imagine, using JavaScript; am I correct in thinking that applying overflow:hidden to the wrapping div will prevent images larger than the viewing area from spilling outside the viewer area?
I also want to synchronise audio with some of these animations. I understand that synchronising canvas events with the audio would be the best way to do this, permitting both to begin only upon page load or a next-button click.
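A minimal sketch of starting the soundtrack and a canvas animation from the same click, using the audio element's clock as the timeline (the element IDs, image file and 50px-per-second speed are placeholders):

// Sketch: start the soundtrack and the panel animation together,
// and drive the animation from audio.currentTime so the two stay in sync.
var audio  = document.getElementById('soundtrack');  // an <audio> element
var canvas = document.getElementById('panel');
var ctx    = canvas.getContext('2d');
var layerImage = new Image();
layerImage.src = 'layer.png';                         // assumed to be preloaded

function drawFrame() {
  var t = audio.currentTime;                          // seconds since playback started
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.drawImage(layerImage, t * 50, 0);               // slide the layer across the pane
  if (!audio.ended) {
    requestAnimationFrame(drawFrame);
  }
}

document.getElementById('next').addEventListener('click', function () {
  audio.play();
  requestAnimationFrame(drawFrame);
});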
That's about everything. As said, any advice at whatever level would be greatly appreciated, even if it's 'yes' or 'no' to the various questions I've asked. At root, it would also be good to know if HTML5 is the best option for what I'm after or whether (with gritted teeth) I should stick to Flash for now and go after handhelds using Adobe AIR.