Libgdx AtlasTmxMapLoader with multiple tilesets

I am working on a Libgdx game which loads Tiled maps. The current map I am working on makes use of 2 tilesets, one for shadow/light and another for terrain and buildings. The general process, which has been working fine, is that I receive the sprite sheet from the artist, design the maps, then take the spritesheet file and split it using ImageMagick. From there I take the split images and create an optimized png and atlas file with TexturePacker.
However, this is the first map I have made that makes use of multiple tilesets. The issue I am having is that when loading the map with AtlasTmxMapLoader, it relies on a single atlas file property in the map. My shadows and lighting are split into a separate image and atlas, and I'd rather not merge them all into one in Tiled (and have to re-do a portion of the map).
Perhaps I am missing something simple. How can I handle multiple tilesets?

So after reading more into how .tmx files are read I was able to fix my problem.
Here is how to do it properly when working with multiple tilesets and re-packing your spritesheets in TexturePacker. First, cut up the tileset images using a utility like ImageMagick, and make sure they are indexed (specified by an underscore and number in the filename). You can do this with the crop operation in ImageMagick like so:
convert.exe "shrine_tileset.png" -crop 16x16 "shrine_tileset_%02d.png"
Second, re-pack all tiles from all tilesets into a single atlas in TexturePacker. If it works correctly you will see the name of each tileset in the atlas file with an associated index based on the tile id. For example:
shrine_tileset
  rotate: false
  xy: 382, 122
  size: 16, 16
  orig: 16, 16
  offset: 0, 0
  index: 703
Finally (and this is the part I could not figure out), make sure each tileset's tile indexes line up with the "firstgid" value in the .tmx file. For example, my second tileset starts from gid 2049, as there are 2048 tiles in the first sheet. The firstgid is declared at the top of the .tmx file for each tileset:
<tileset firstgid="2049" source="shadow_light.tsx"/>
So when cutting up the tiles for my tileset "shadow_light", I would start them from index 2048, one less than the gid, e.g. "shadow_light_2048.png".
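To make the offset concrete, here is a small hypothetical helper (not part of the original workflow) that computes the filename a split tile should get before packing, given its tileset's firstgid; the names and values mirror the examples above:

```java
public class TileNaming {
    // Atlas index = (firstgid - 1) + local id within the sheet.
    // Hypothetical helper for illustration only.
    static String packedName(String tilesetName, int firstgid, int localId) {
        int index = (firstgid - 1) + localId;
        return tilesetName + "_" + index + ".png";
    }

    public static void main(String[] args) {
        // First tileset (firstgid 1): tile 703 gets the index 703 seen in the atlas
        System.out.println(packedName("shrine_tileset", 1, 703)); // shrine_tileset_703.png
        // Second tileset (firstgid 2049): its first tile starts at index 2048
        System.out.println(packedName("shadow_light", 2049, 0));  // shadow_light_2048.png
    }
}
```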
Hopefully this helps someone!

I am no LibGDX expert, but almost all tilemap renderers I've seen work with a single tileset. The reason is that they render using OpenGL: the renderer binds one texture and draws all tiles in a single draw call, so it can't switch textures in between.
The best way would be to create 2 (or more) separate layers, where each layer uses 1 tileset. E.g. 1 for the background, 1 for the shadows, 1 for the foreground (e.g. walls).
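For example, a map split this way might reference one tileset per layer; a sketch of the .tmx (the layer and tileset names here are hypothetical):

```xml
<map version="1.0" orientation="orthogonal" width="100" height="100" tilewidth="16" tileheight="16">
  <tileset firstgid="1" source="terrain.tsx"/>
  <tileset firstgid="2049" source="shadow_light.tsx"/>
  <layer name="background" width="100" height="100">
    <!-- uses gids 1-2048 (terrain tileset only) -->
  </layer>
  <layer name="shadows" width="100" height="100">
    <!-- uses gids 2049+ (shadow/light tileset only) -->
  </layer>
</map>
```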

This issue is fixed in 1.9.11. If you are using an earlier version, you can override AtlasTmxMapLoader with a fix.
MyAtlasTmxMapLoader.java
import com.badlogic.gdx.files.FileHandle;
import com.badlogic.gdx.maps.ImageResolver;
import com.badlogic.gdx.maps.MapProperties;
import com.badlogic.gdx.maps.tiled.AtlasTmxMapLoader;
import com.badlogic.gdx.maps.tiled.TiledMapTile;
import com.badlogic.gdx.maps.tiled.TiledMapTileSet;
import com.badlogic.gdx.utils.Array;
import com.badlogic.gdx.utils.GdxRuntimeException;
import com.badlogic.gdx.utils.SerializationException;
import com.badlogic.gdx.utils.XmlReader.Element;
public class MyAtlasTmxMapLoader extends AtlasTmxMapLoader {
    /**
     * Same as AtlasTmxMapLoader, but fixed to get the firstgid attribute from the tileset element in the TMX file, not the .tsx file.
     */
    @Override
    protected void loadTileSet(Element mapElement, FileHandle tmxFile, ImageResolver imageResolver) {
        if (mapElement.getName().equals("tileset")) {
            String imageSource = "";
            int imageWidth = 0;
            int imageHeight = 0;
            FileHandle image = null;
            Element element = null;
            String source = mapElement.getAttribute("source", null);
            if (source != null) {
                FileHandle tsx = getRelativeFileHandle(tmxFile, source);
                try {
                    element = xml.parse(tsx);
                    Element imageElement = element.getChildByName("image");
                    if (imageElement != null) {
                        imageSource = imageElement.getAttribute("source");
                        imageWidth = imageElement.getIntAttribute("width", 0);
                        imageHeight = imageElement.getIntAttribute("height", 0);
                        image = getRelativeFileHandle(tsx, imageSource);
                    }
                } catch (SerializationException e) {
                    throw new GdxRuntimeException("Error parsing external tileset.");
                }
            } else {
                // Embedded tileset: the attributes live on the tileset element itself
                element = mapElement;
                Element imageElement = mapElement.getChildByName("image");
                if (imageElement != null) {
                    imageSource = imageElement.getAttribute("source");
                    imageWidth = imageElement.getIntAttribute("width", 0);
                    imageHeight = imageElement.getIntAttribute("height", 0);
                    image = getRelativeFileHandle(tmxFile, imageSource);
                }
            }
            String name = element.get("name", null);
            // Get the firstgid attribute from the tileset element in the TMX file, not the .tsx file.
            int firstgid = mapElement.getIntAttribute("firstgid", 1);
            int tilewidth = element.getIntAttribute("tilewidth", 0);
            int tileheight = element.getIntAttribute("tileheight", 0);
            int spacing = element.getIntAttribute("spacing", 0);
            int margin = element.getIntAttribute("margin", 0);
            Element offset = element.getChildByName("tileoffset");
            int offsetX = 0;
            int offsetY = 0;
            if (offset != null) {
                offsetX = offset.getIntAttribute("x", 0);
                offsetY = offset.getIntAttribute("y", 0);
            }
            TiledMapTileSet tileSet = new TiledMapTileSet();
            // TileSet
            tileSet.setName(name);
            final MapProperties tileSetProperties = tileSet.getProperties();
            Element properties = element.getChildByName("properties");
            if (properties != null) {
                loadProperties(tileSetProperties, properties);
            }
            tileSetProperties.put("firstgid", firstgid);
            // Tiles
            Array<Element> tileElements = element.getChildrenByName("tile");
            addStaticTiles(tmxFile, imageResolver, tileSet, element, tileElements, name, firstgid, tilewidth,
                tileheight, spacing, margin, source, offsetX, offsetY, imageSource, imageWidth, imageHeight, image);
            for (Element tileElement : tileElements) {
                int localtid = tileElement.getIntAttribute("id", 0);
                TiledMapTile tile = tileSet.getTile(firstgid + localtid);
                if (tile != null) {
                    addTileProperties(tile, tileElement);
                    addTileObjectGroup(tile, tileElement);
                    addAnimatedTile(tileSet, tile, tileElement, firstgid);
                }
            }
            map.getTileSets().addTileSet(tileSet);
        }
    }
}
And then call:
TiledMap map = new MyAtlasTmxMapLoader().load(pathname);
Source: [Tutorial] Using multiple Tilesets with Libgdx and Tiled

Related

Need help rendering a JMapFrame from a cropped geotiff

I'm new to GeoTools and somewhat new to Java. I have created a program that can read a geotiff, crop it, and, using JavaFX, render the cropped image into an ImageView. Now I'd like to add geographical points as layers on the rendered image. I've accomplished creating a MapContent with a title. Where I am having issues is rendering a JMapFrame to test that the data is being passed. I am trying to create and add a GridCoverageLayer of the cropped image. I cannot get the JMapFrame to render the image; it appears to be stuck in a loop. I suspect the issue is setting the Style of the Layer to NULL. If this is the issue, how do I create a raster-based Style? I've tried reading the GeoTools API and tutorials, and I just can't make heads or tails of them half the time.
My ultimate goal is render the map with symbols with JavaFX instead of AWT.
import org.geotools.coverage.grid.GridCoverage2D;
import org.geotools.coverage.processing.CoverageProcessor;
import org.geotools.gce.geotiff.GeoTiffReader;
import org.geotools.geometry.GeneralDirectPosition;
import org.geotools.geometry.GeneralEnvelope;
import org.geotools.geometry.jts.ReferencedEnvelope;
import org.geotools.map.GridCoverageLayer;
import org.geotools.map.Layer;
import org.geotools.map.MapContent;
import org.geotools.referencing.GeodeticCalculator;
import org.geotools.swing.JMapFrame;
import org.geotools.util.factory.Hints;
import org.opengis.parameter.ParameterValueGroup;
import org.opengis.referencing.crs.CoordinateReferenceSystem;
import org.opengis.referencing.operation.TransformException;
import java.io.File;
import java.io.IOException;
public class Processor {
    private static void getImage(File file, double NE_lon, double NE_lat, double SW_lon, double SW_lat) throws IOException, TransformException {
        // Create the coverage processor and create the crop operation
        final CoverageProcessor processor = new CoverageProcessor();
        final ParameterValueGroup param = processor.getOperation("CoverageCrop").getParameters();
        // Read the TIFF, create the coverage/grid, get the CRS, and get the image envelope
        GeoTiffReader reader = new GeoTiffReader(file, new Hints(Hints.FORCE_LONGITUDE_FIRST_AXIS_ORDER, Boolean.TRUE));
        GridCoverage2D coverage = reader.read(null);
        CoordinateReferenceSystem inCRS = coverage.getCoordinateReferenceSystem();
        GeneralEnvelope inEnvelope = (GeneralEnvelope) coverage.getEnvelope();
        // Get the image envelope min/max coordinates
        GeneralDirectPosition inMaxDP = (GeneralDirectPosition) inEnvelope.getUpperCorner();
        GeneralDirectPosition inMinDP = (GeneralDirectPosition) inEnvelope.getLowerCorner();
        // Calculate the crop cartesian min/max coordinates
        GeodeticCalculator calc = new GeodeticCalculator(inCRS);
        calc.setStartingGeographicPoint(NE_lon, NE_lat);
        GeneralDirectPosition cropMaxDP = (GeneralDirectPosition) calc.getStartingPosition();
        calc.setStartingGeographicPoint(SW_lon, SW_lat);
        GeneralDirectPosition cropMinDP = (GeneralDirectPosition) calc.getStartingPosition();
        // Output to console the original and cropped cartesian min/max coordinates
        System.out.println("Coordinate system: ");
        System.out.println("NE (max) corner (meters from meridian (x), origin (y)): " + inMaxDP);
        System.out.println("SW (min) corner (meters from meridian (x), origin (y)): " + inMinDP);
        System.out.println();
        System.out.println("NE (max) trim corner (lon,lat): " + NE_lon + "," + NE_lat);
        System.out.println("SW (min) trim corner (lon,lat): " + SW_lon + "," + SW_lat);
        System.out.println("NE (max) trim corner (meters from meridian (x), origin (y)): " + cropMaxDP);
        System.out.println("SW (min) trim corner (meters from meridian (x), origin (y)): " + cropMinDP);
        System.out.println();
        // Create the crop envelope size and crop the image envelope
        final ReferencedEnvelope crop = new ReferencedEnvelope(
            cropMinDP.getOrdinate(0),
            cropMaxDP.getOrdinate(0),
            cropMinDP.getOrdinate(1),
            cropMaxDP.getOrdinate(1),
            inCRS);
        // Set the Processor to look at the Coverage2D image and crop to the ReferencedEnvelope set
        param.parameter("Source").setValue(coverage);
        param.parameter("Envelope").setValue(crop);
        GridCoverage2D cropCoverage = (GridCoverage2D) processor.doOperation(param);
        // Create a Map with layers
        MapContent map = new MapContent();
        map.setTitle("Detroit");
        Layer coverageLayer = new GridCoverageLayer(cropCoverage, null, "Background");
        map.addLayer(coverageLayer);
        JMapFrame.showMap(map);
        // Generate a BufferedImage of the GridCoverage2D
        // PlanarImage croppedRenderedImageImage = (PlanarImage) cropCoverage.getRenderedImage();
        // BufferedImage image = croppedRenderedImageImage.getAsBufferedImage();
        // System.out.println("Image type: " + image.getType());
        // System.out.println("Image height: " + image.getHeight());
        // System.out.println("Image width: " + image.getWidth());
        // Write crop to file system
        /*File outFile = new File("/home/greg/Software_Projects/JavaProjects/charts/Detroit_98/Detroit_SEC_98.tif");
        GeoTiffWriter writer = new GeoTiffWriter(outFile, new Hints(Hints.FORCE_LONGITUDE_FIRST_AXIS_ORDER, Boolean.TRUE));
        writer.write(cropped, null);*/
    }

    public static void main(String[] args) throws IOException, TransformException {
        File inFile = new File("/home/greg/Software_Projects/JavaProjects/charts/Detroit_98/Detroit SEC 98.tif");
        getImage(inFile, -81, 42, -82.5, 41);
    }
}
EDIT
GdalInfo for image
Image Structure Metadata:
  INTERLEAVE=BAND
Corner Coordinates:
Upper Left  (  -84165.569,  -73866.808) ( 82d 0'29.23"W, 41d29'48.29"N)
Lower Left  (  -84165.569, -129071.257) ( 82d 0' 0.36"W, 40d59'59.09"N)
Upper Right (  -41702.491,  -73866.808) ( 81d29'58.28"W, 41d30' 0.88"N)
Lower Right (  -41702.491, -129071.257) ( 81d29'43.98"W, 41d 0'11.57"N)
Center      (  -62934.030, -101469.032) ( 81d45' 2.94"W, 41d15' 0.94"N)
Band 1 Block=1003x1 Type=Byte, ColorInterp=Palette
  NoData Value=0
  Color Table (RGB with 256 entries)
Update
Your tiff is not a simple raster; it contains a paletted image (ColorInterp=Palette), so each pixel contains a single byte between 0-255 which maps to a colour. So your attempt to symbolize the image will not work, as there is no linear relationship between pixel values and colours. To display this image in GeoTools you need an empty RasterSymbolizer, which is what the createGreyscaleStyle() method does. I've tested it with a paletted image and it works fine for me (note that bands count from 1, and you only have one band).
private Style createGreyscaleStyle(int band) {
    // sf is the StyleFactory used throughout the GeoTools tutorial code,
    // e.g. obtained from CommonFactoryFinder.getStyleFactory()
    ContrastEnhancement ce = new ContrastEnhancementImpl();
    SelectedChannelType sct = sf.createSelectedChannelType(String.valueOf(band), ce);
    RasterSymbolizer sym = sf.getDefaultRasterSymbolizer();
    ChannelSelection sel = sf.channelSelection(sct);
    sym.setChannelSelection(sel);
    return SLD.wrapSymbolizers(sym);
}
Section 4 of the Image Tutorial shows how to create a colour raster SLD - you can't just use a NULL style, as GeoTools will then have no idea how to convert the bands to an image. There is a fuller description of the possible RasterSymbolizer options in the SLD reference in the GeoServer manual. Alternatively, you can import an SLD file containing the style.
/**
 * This method examines the names of the sample dimensions in the provided coverage looking for
 * "red...", "green..." and "blue..." (case insensitive match). If these names are not found it
 * uses bands 1, 2, and 3 for the red, green and blue channels. It then sets up a raster
 * symbolizer and returns this wrapped in a Style.
 *
 * @return a new Style object containing a raster symbolizer set up for RGB image
 */
private Style createRGBStyle() {
    GridCoverage2D cov = null;
    try {
        cov = reader.read(null);
    } catch (IOException giveUp) {
        throw new RuntimeException(giveUp);
    }
    // We need at least three bands to create an RGB style
    int numBands = cov.getNumSampleDimensions();
    if (numBands < 3) {
        return null;
    }
    // Get the names of the bands
    String[] sampleDimensionNames = new String[numBands];
    for (int i = 0; i < numBands; i++) {
        GridSampleDimension dim = cov.getSampleDimension(i);
        sampleDimensionNames[i] = dim.getDescription().toString();
    }
    final int RED = 0, GREEN = 1, BLUE = 2;
    int[] channelNum = {-1, -1, -1};
    // We examine the band names looking for "red...", "green...", "blue...".
    // Note that the channel numbers we record are indexed from 1, not 0.
    for (int i = 0; i < numBands; i++) {
        String name = sampleDimensionNames[i].toLowerCase();
        if (name != null) {
            if (name.matches("red.*")) {
                channelNum[RED] = i + 1;
            } else if (name.matches("green.*")) {
                channelNum[GREEN] = i + 1;
            } else if (name.matches("blue.*")) {
                channelNum[BLUE] = i + 1;
            }
        }
    }
    // If we didn't find named bands "red...", "green...", "blue..."
    // we fall back to using the first three bands in order
    if (channelNum[RED] < 0 || channelNum[GREEN] < 0 || channelNum[BLUE] < 0) {
        channelNum[RED] = 1;
        channelNum[GREEN] = 2;
        channelNum[BLUE] = 3;
    }
    // Now we create a RasterSymbolizer using the selected channels
    SelectedChannelType[] sct = new SelectedChannelType[cov.getNumSampleDimensions()];
    ContrastEnhancement ce = sf.contrastEnhancement(ff.literal(1.0), ContrastMethod.NORMALIZE);
    for (int i = 0; i < 3; i++) {
        sct[i] = sf.createSelectedChannelType(String.valueOf(channelNum[i]), ce);
    }
    RasterSymbolizer sym = sf.getDefaultRasterSymbolizer();
    ChannelSelection sel = sf.channelSelection(sct[RED], sct[GREEN], sct[BLUE]);
    sym.setChannelSelection(sel);
    return SLD.wrapSymbolizers(sym);
}

How to find the associated polygon from pixel canvas HTML5

I am working on an HTML5 canvas application. I am now able to find the coordinates and the hex value of any point I click on the canvas.
Suppose I click an area which has a filled polygon (and I know the color of the polygon). Is there any way or algorithm to return the enclosing coordinates which drew the polygon?
Solution
Store the polygon points in objects or array and use the canvas method:
var hit = context.isPointInPath(x, y);
By storing the points as objects/arrays you can simply iterate through the collection to re-build the path for each polygon and then call the method above (no need to redraw the polygon for checks).
If you have a hit then you know which object it is as it would be the current. You can store other meta data on these objects such as color.
Example
You can make a simple class to store coordinates as well as color (a basic object you can extend as needed):
function Polygon(pts, color) {
    this.points = pts;
    this.color = color;
    return this;
}
Then define some points:
/// object collection
var myPolygons = [];
/// add polygons to it
myPolygons.push( new Polygon([[10,10], [30,30], [70,70]], 'black') );
myPolygons.push( new Polygon([[50,40], [70,80], [30,30]], 'red') );
Render the polygons
Then render the polygons:
for(var i = 0, p, points; p = myPolygons[i]; i++) {
    points = p.points;
    ctx.beginPath();
    ctx.moveTo(points[0][0], points[0][1]);
    for(var t = 1; t < points.length; t++) {
        ctx.lineTo(points[t][0], points[t][1]);
    }
    ctx.closePath();
    ctx.fillStyle = p.color;
    ctx.fill();
}
Check if the point is in the path
Then check for hit based on position x, y when you do a click.
As you can see, the function is almost identical to the render function (and of course you can refactor them into a single function), but this doesn't actually draw anything; it only re-builds the path for the check.
for(var i = 0, p, points; p = myPolygons[i]; i++) {
    points = p.points;
    ctx.beginPath();
    ctx.moveTo(points[0][0], points[0][1]);
    for(var t = 1; t < points.length; t++) {
        ctx.lineTo(points[t][0], points[t][1]);
    }
    ctx.closePath();
    /// instead of drawing anything, we check:
    if (ctx.isPointInPath(x, y) === true) {
        alert('Hit: ' + p.color);
        return;
    }
}
Conclusion
Typically this will turn out to be faster than iterating the bitmap array (which is also a perfectly good solution). This is mainly because the checks are done internally in compiled code.
In the future we will have access to building Path objects ourselves and with that we can simply pass a single Path object already holding all the path info instead of re-building it, which means even higher speed.
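For reference, the even-odd containment test that isPointInPath performs for you can also be sketched independently of the canvas as a ray-casting routine (shown here in plain Java purely for illustration; the class and method names are hypothetical):

```java
public class PointInPolygon {
    // Classic ray-casting (even-odd rule): cast a horizontal ray to the right
    // of (x, y) and count how many polygon edges it crosses. Odd count = inside.
    static boolean contains(double[][] pts, double x, double y) {
        boolean inside = false;
        for (int i = 0, j = pts.length - 1; i < pts.length; j = i++) {
            // Does edge (j -> i) straddle the ray's y, and lie to the right of x?
            if ((pts[i][1] > y) != (pts[j][1] > y)
                    && x < (pts[j][0] - pts[i][0]) * (y - pts[i][1]) / (pts[j][1] - pts[i][1]) + pts[i][0]) {
                inside = !inside;
            }
        }
        return inside;
    }

    public static void main(String[] args) {
        double[][] square = {{0, 0}, {10, 0}, {10, 10}, {0, 10}};
        System.out.println(contains(square, 5, 5));  // true
        System.out.println(contains(square, 15, 5)); // false
    }
}
```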

Processing.js - Drawing onto an Image

Using the HTML5 Canvas element and Processing.js, I want to draw circle outlines onto an image. However, when I run my code, the area inside the circle outlines simply turns white, instead of retaining the image information in that region. I suspect that this is an image loading problem, but I've included the comment:
/* #pjs preload="myImage.jpg"; */
that I've read will fix that. My entire .pde file is below:
/* #pjs preload="myImage.jpg"; */
// Global Variables
PImage img;
// Data Storage
ArrayList cent_x;
ArrayList cent_y;
ArrayList cent_r;
void setup() {
    // Establish Canvas
    size(760, 560);
    // Load Image
    img = loadImage("myImage.jpg");
    // Initialize Data Structures
    cent_x = new ArrayList();
    cent_y = new ArrayList();
    cent_r = new ArrayList();
}

void draw() {
    // Draw Background Image
    image(img, 0, 0, width, height);
    // Add to Data Structures
    if (mousePressed) {
        cent_x.add(mouseX);
        cent_y.add(mouseY);
        cent_r.add(15);
    }
    // Draw all Marks
    for (int i = 0; i < cent_x.size(); i++) {
        int c_x = cent_x.get(i);
        int c_y = cent_y.get(i);
        int c_r = cent_r.get(i);
        stroke(255, 255, 0);
        ellipse(c_x, c_y, c_r, c_r);
    }
}
Any thoughts on why that may be happening?
You need to call noFill() before calling ellipse(). By default ellipses are drawn with a white fill, which is what is covering the image inside each outline. That should do it.

AS3: difference between cacheAsBitmap vs CachedSprite

I have cacheAsBitmap = true
and the following class:
package utility {
    import flash.display.Bitmap;
    import flash.display.BitmapData;
    import flash.display.Sprite;
    import flash.geom.Matrix;
    import flash.geom.Rectangle;
    import flash.utils.getQualifiedClassName;

    public class CachedSprite extends Sprite {
        // Declare a static data cache
        protected static var cachedData:Object = { };
        public var clip:Bitmap;

        public function CachedSprite(asset:Class, centered:Boolean = false, scale:int = 2) {
            // Check the cache to see if we've already cached this asset
            var data:BitmapData = cachedData[getQualifiedClassName(asset)];
            if (!data) {
                // Not yet cached. Let's do it now.
                // This should make "Class", "Sprite", and "Bitmap" data types all work.
                var instance:Sprite = new Sprite();
                instance.addChild(new asset());
                // Get the bounds of the object in case top-left isn't 0,0
                var bounds:Rectangle = instance.getBounds(this);
                // Optionally, use a matrix to up-scale the vector asset;
                // this way you can increase scale later and it still looks good.
                var m:Matrix = new Matrix();
                m.translate(-bounds.x, -bounds.y);
                m.scale(scale, scale);
                // This shoves the data to our cache. For mobiles in GPU-rendering mode,
                // this also uploads it to the GPU as a texture at this point.
                data = new BitmapData(instance.width * scale, instance.height * scale, true, 0x0);
                data.draw(instance, m, null, null, null, true); // final true enables smoothing
                cachedData[getQualifiedClassName(asset)] = data;
            }
            // This uses the data already in the GPU texture bank, saving a draw/memory/push call:
            clip = new Bitmap(data, "auto", true);
            // Use the bitmap class to inversely scale, so the asset still
            // appears to be its normal size
            clip.scaleX = clip.scaleY = 1 / scale;
            addChild(clip);
            if (centered) {
                // If we want the clip to be centered instead of top-left oriented:
                clip.x = clip.width / -2;
                clip.y = clip.height / -2;
            }
            // Optimize mouse children
            mouseChildren = false;
        }

        public function kill():void {
            // Just in case you want to clean up things the manual way
            removeChild(clip);
            clip = null;
        }
    }
}
Can anyone explain the difference to me? Why do I need to implement this class rather than just using cacheAsBitmap = true? Thanks
To avoid redrawing the DisplayObject when it is moved, you can set the cacheAsBitmap property. If set to true, Flash runtimes cache an internal bitmap representation of the display object.
The cacheAsBitmap property is automatically set to true whenever you apply a filter to a display object. It is best used with display objects that have mostly static content and that do not scale, rotate, or change alpha frequently, because the bitmap data must be recalculated for any operation beyond horizontal or vertical movement.
Caching the bitmap yourself gives you control over the rendering lifecycle.
In your CachedSprite class, what's actually added to the display list is a Bitmap, versus adding your original display object. Any interaction with input devices must therefore be applied to the CachedSprite instance.
The main difference seems to be on this line:
var data:BitmapData = cachedData[getQualifiedClassName(asset)];
This class keeps a static reference to any previously cached bitmaps. If you have two instances of CachedSprite that are showing the same bitmap data (like a particle, for example) this class will only use one instance of BitmapData, saving memory.
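That static-cache idea is plain memoization: build the expensive data once per key, then share it. A minimal language-neutral sketch (written in Java for illustration, with an int[] standing in for BitmapData; the names are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

public class RenderCache {
    // Mirrors CachedSprite's static cachedData: expensive data is built once
    // per key and shared by every later instance.
    private static final Map<String, int[]> cachedData = new HashMap<>();
    static int buildCount = 0; // counts how often the expensive "draw" really ran

    static int[] get(String key) {
        int[] data = cachedData.get(key);
        if (data == null) {
            buildCount++;                     // the expensive rasterization happens only here
            data = new int[] { key.length() }; // stand-in for data.draw(instance, ...)
            cachedData.put(key, data);
        }
        return data;
    }

    public static void main(String[] args) {
        // Two "sprites" sharing one cached texture: only one build occurs
        get("Particle");
        get("Particle");
        System.out.println(buildCount); // 1
    }
}
```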

Duplicating a swf loaded with LoaderMax

I'm trying to duplicate a swf loaded using GreenSock's LoaderMax, but I don't seem to be able to.
I'm using the following code:
private var _assetCache:Dictionary;

public function getAssetById(assetId:String):DisplayObject
{
    var asset:DisplayObject;
    if (!_assetCache[assetId])
    {
        _assetCache[assetId] = LoaderMax.getContent(assetId).rawContent;
    }
    asset = _assetCache[assetId];
    // duplicate bitmap
    if (asset is Bitmap)
    {
        var bmd:BitmapData = new BitmapData(asset.width, asset.height, true, 0);
        return new Bitmap(bmd, "auto", true);
    }
    // otherwise return
    return SpriteUtils.duplicateDisplayObject(asset);
    //return asset; // if the previous line is commented out, this works well but there will be only a single copy of the loaded swf, kinda negating the use of a cache
}
and this is SpriteUtils.duplicateDisplayObject(asset) (taken from this):
static public function duplicateDisplayObject(target:DisplayObject, autoAdd:Boolean = false):DisplayObject
{
    // create duplicate
    var targetClass:Class = Object(target).constructor;
    var duplicate:DisplayObject = new targetClass();
    trace(duplicate, duplicate.height); // traces [MovieClip, 0]
    // duplicate properties
    duplicate.transform = target.transform;
    duplicate.filters = target.filters;
    duplicate.cacheAsBitmap = target.cacheAsBitmap;
    duplicate.opaqueBackground = target.opaqueBackground;
    if (target.scale9Grid)
    {
        var rect:Rectangle = target.scale9Grid;
        // WAS Flash 9 bug where returned scale9Grid is 20x larger than assigned
        // rect.x /= 20, rect.y /= 20, rect.width /= 20, rect.height /= 20;
        duplicate.scale9Grid = rect;
    }
    // add to target parent's display list
    // if autoAdd was provided as true
    if (autoAdd && target.parent)
    {
        target.parent.addChild(duplicate);
    }
    return duplicate;
}
If I simply return the asset from _assetCache (which is a Dictionary) without duplicating it, it works and traces as a MovieClip. But when I try to duplicate it, nothing is displayed, even though the traces tell me that the duplicate is a MovieClip (with height 0). Note the clip being loaded is a simple vector graphic on the stage of the root of the timeline.
thanks in advance
obie
I don't think simply calling the constructor will work.
Try doing this (I've assumed certain prior knowledge, since you're able to get rawContent above):
1) Load the data in using a DataLoader in binary mode. In the context of LoaderMax it would be like:
swfLoader = new LoaderMax({name: "swfLoader", onProgress: swfProgressHandler, onChildComplete: swfLoaded, auditSize: false, maxConnections: 2});
swfLoader.append(new DataLoader(url, {format: 'binary'}));
(the main point is to use DataLoader with format: 'binary')
2) Upon complete, store the ByteArray this returns in your dictionary. The first part of your code above will be basically unchanged, though the ByteArray might be in content rather than rawContent.
3) Whenever you need a duplicate copy, use Loader.loadBytes() on the ByteArray, i.e.:
var ldr:Loader = new Loader();
ldr.contentLoaderInfo.addEventListener(Event.COMPLETE, swfReallyLoaded);
ldr.loadBytes(_assetCache[assetId]);
As with all loading, you might need to set a LoaderContext, and if in AIR: allowLoadBytesCodeExecution = true; allowCodeImport = true;