I am creating an Android game with LibGDX. How can I let the user pick a profile photo from the device gallery?
Kotlin solution (2022)
In the main activity we write the following code to get an image from the gallery:
class MainActivity : AppCompatActivity(), AndroidFragmentApplication.Callbacks {

    private var blockImageFromGalleryResult: (Uri?) -> Unit = {}

    private val selectImageFromGalleryResult =
        registerForActivityResult(ActivityResultContracts.GetContent()) { uri: Uri? ->
            blockImageFromGalleryResult(uri)
        }

    fun selectImageFromGallery(block: (Uri?) -> Unit) {
        blockImageFromGalleryResult = block
        selectImageFromGalleryResult.launch("image/*")
    }
    // ...
}
On the screen that shows the texture, tapping the photo opens the gallery via the method we wrote in the main activity. That method returns the URI of the chosen image; we convert the URI to a Bitmap and the Bitmap to a Texture:
class MenuScreen : AdvancedScreen(1280f, 727f) {

    private val photoImage = Image(SpriteManager.MenuRegion.PHOTO.region)

    private fun AdvancedStage.addPhoto() {
        addActor(photoImage)
        photoImage.apply {
            setBounds(LM.photo)
            toClickable().setOnClickListener {
                // Use the existing activity instance; constructing a new
                // MainActivity() here would not work.
                game.activity.selectImageFromGallery {
                    it?.let { uri ->
                        val bitmap: Bitmap = game.activity.contentResolver
                            .openInputStream(uri)
                            ?.use { data -> BitmapFactory.decodeStream(data) }
                            ?: return@let
                        runGDX {
                            // Create an empty GL texture and upload the bitmap into it.
                            val tex = Texture(bitmap.width, bitmap.height, Pixmap.Format.RGBA8888)
                            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex.textureObjectHandle)
                            GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0)
                            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0)
                            bitmap.recycle()
                            photoImage.drawable = TextureRegionDrawable(tex)
                        }
                    }
                }
            }
        }
    }
}
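As an aside, if you would rather avoid the raw GLES20/GLUtils calls, you can copy the bitmap's pixels into a Pixmap and build the Texture from that. The only subtlety is that Android packs each pixel as an ARGB int while Pixmap.Format.RGBA8888 expects RGBA byte order. A minimal, engine-free sketch of just that conversion (the function name is illustrative):

```kotlin
// Convert one Android ARGB_8888 pixel (as returned by Bitmap.getPixels)
// into the 4 RGBA bytes expected by an RGBA8888 pixel buffer.
fun argbToRgbaBytes(pixel: Int): ByteArray {
    val a = (pixel ushr 24 and 0xFF).toByte()
    val r = (pixel ushr 16 and 0xFF).toByte()
    val g = (pixel ushr 8 and 0xFF).toByte()
    val b = (pixel and 0xFF).toByte()
    return byteArrayOf(r, g, b, a) // RGBA order
}
```

Loop this over `Bitmap.getPixels` into the Pixmap's byte buffer and you can skip binding the texture yourself.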
That's all. For a deeper dive into LibGDX on Android, you can read my article:
https://medium.com/me/stats/post/4858e26734cf
PS. Vel_daN: Love what You DO 💚.
Hello :) In my project I want to get an audio track from a JSON file located on a server.
My audio .mp4 doesn't want to play and I don't know why. I made the same script for video and it works fine, so I thought that if I have only the sound it could work the same way as a video. This is my script:
using UnityEngine;
using UnityEngine.Networking;
using UnityEngine.UI;
using System.Collections;
using TMPro;
using UnityEngine.Video;
// JSON Dataaudio format
/*
{
"Title" : "..." ,
"AudioURL" : "..."
}
*/
[System.Serializable] // required for JsonUtility
public struct Dataaudio
{
public string Title;
public string AudioURL;
}
public class getaudio : MonoBehaviour
{
[SerializeField] TextMeshPro TitleText;
[SerializeField] private VideoPlayer videoPlayer;
[SerializeField] private RawImage rawImage;
string jsonURL = "https://myserver";
IEnumerator Start()
{
using (var request = UnityWebRequest.Get(jsonURL))
{
yield return request.SendWebRequest();
if (request.isNetworkError || request.isHttpError)
{
// error ...
}
else
{
// success...
Dataaudio data = JsonUtility.FromJson<Dataaudio>(request.downloadHandler.text);
// print data in UI
TitleText.text = data.Title;
// The video player will take care of loading the video clip all by itself!
videoPlayer.url = data.AudioURL;
videoPlayer.Play();
}
}
}
}
I looked at the Audio Source game object, but it doesn't support a URL the way the Video Player game object does.
I hope someone can help me. Thank you.
Use a second request via UnityWebRequestMultimedia.GetAudioClip, e.g.
IEnumerator Start()
{
Dataaudio data;
using (var request = UnityWebRequest.Get(jsonURL))
{
yield return request.SendWebRequest();
if (request.isNetworkError || request.isHttpError)
{
// error ...
yield break;
}
else
{
// success...
data = JsonUtility.FromJson<Dataaudio>(request.downloadHandler.text);
}
}
// print data in UI
TitleText.text = data.Title;
using (var clipRequest = UnityWebRequestMultimedia.GetAudioClip(data.AudioURL, AudioType.WAV /*TODO use correct audio type here*/))
{
yield return clipRequest.SendWebRequest();
if (clipRequest.isNetworkError || clipRequest.isHttpError)
{
// error ...
yield break;
}
else
{
// success...
var clip = DownloadHandlerAudioClip.GetContent(clipRequest);
someAudioSource.clip = clip;
someAudioSource.Play();
}
}
}
Note though that .mp4 is usually a video format, not a sound format.
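Picking the right AudioType for GetAudioClip matters per platform. The extension-to-type mapping is simple enough to sketch as standalone logic (shown here in Kotlin purely for illustration; it translates line-for-line to C#, and the strings mirror Unity's AudioType enum names):

```kotlin
// Map a URL's file extension to the Unity AudioType name that usually
// matches it; "UNKNOWN" lets Unity guess, which not all platforms support.
fun audioTypeFor(url: String): String =
    when (url.substringAfterLast('.').lowercase()) {
        "wav"         -> "WAV"
        "mp3"         -> "MPEG"
        "ogg"         -> "OGGVORBIS"
        "aif", "aiff" -> "AIFF"
        else          -> "UNKNOWN"
    }
```

So for an .mp3 URL you would pass AudioType.MPEG instead of the AudioType.WAV placeholder above.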
I want to get a GIF through a DataSubscriber with Fresco, but when I get the CloseableAnimatedImage I don't know how to get its bitmap.
public void getBitmap(String url, final OnBitmapFetchedListener listener) {
    ImageRequest request = ImageRequest.fromUri(Uri.parse(url));
    final ImagePipeline imagePipeline = Fresco.getImagePipeline();
    DataSource<CloseableReference<CloseableImage>> dataSource =
            imagePipeline.fetchDecodedImage(request, null);
    DataSubscriber<CloseableReference<CloseableImage>> dataSubscriber =
            new BaseDataSubscriber<CloseableReference<CloseableImage>>() {
        @Override
        protected void onNewResultImpl(DataSource<CloseableReference<CloseableImage>> source) {
            CloseableReference<CloseableImage> imageReference = source.getResult();
            if (imageReference != null) {
                try {
                    CloseableImage image = imageReference.get();
                    if (image instanceof CloseableAnimatedImage) {
                        // here I get the gif, but I don't know how to get its bitmap
                    }
                } finally {
                    CloseableReference.closeSafely(imageReference);
                }
            }
        }

        @Override
        protected void onFailureImpl(DataSource<CloseableReference<CloseableImage>> source) {
            listener.onFail();
        }
    };
    dataSource.subscribe(dataSubscriber, CallerThreadExecutor.getInstance());
}
I also tried another way to get the bitmap of a picture:
fun getBitmap(uri: Uri, listener: OnBitmapFetchedListener) {
val request = ImageRequest.fromUri(uri)
val imagePipeline = Fresco.getImagePipeline()
val dataSource = imagePipeline.fetchEncodedImage(request, null)
val dataSubscriber = object : BaseDataSubscriber<CloseableReference<PooledByteBuffer>>() {
override fun onNewResultImpl(closeableReferenceDataSource: DataSource<CloseableReference<PooledByteBuffer>>) {
val imageReference = closeableReferenceDataSource.result
if (imageReference != null) {
try {
val image = imageReference.get()
val inputStream = PooledByteBufferInputStream(image)
val imageFormat = ImageFormatChecker.getImageFormat(inputStream)
Log.e("ImageUtil", "imageFormat = ${ImageFormat.getFileExtension(imageFormat)}")
val bitmap = BitmapFactory.decodeStream(inputStream)
listener.onSuccess(bitmap)
} catch (e: IOException) {
Log.e("ImageUtil", "error:$e")
} finally {
imageReference.close()
}
}
}
override fun onFailureImpl(closeableReferenceDataSource: DataSource<CloseableReference<PooledByteBuffer>>) {
Log.e("ImageUtil", "fail")
listener.onFail()
}
}
// attach the subscriber so the callbacks actually fire
dataSource.subscribe(dataSubscriber, CallerThreadExecutor.getInstance())
}
It's Kotlin code; what I do is use fetchEncodedImage and get the InputStream of the picture.
But it always goes to onFailureImpl, and I don't know why.
It seems that the real question is how to statically display only the first frame of an animated image. This is not supported at the moment, but it would be very easy to implement.
Fresco already has an ImageDecodeOptions object. You would just need to add another field there: public final boolean decodeAsStaticImage. Then, in ImageDecoder.decodeGif, you just need to change:
if (GifFormatChecker.isAnimated(is)) {
return mAnimatedImageFactory.decodeGif(...);
}
return decodeStaticImage(encodedImage);
to:
if (!options.decodeAsStaticImage && GifFormatChecker.isAnimated(is)) {
return mAnimatedImageFactory.decodeGif(...);
}
return decodeStaticImage(encodedImage);
Please feel free to implement this and make a pull request to our GitHub Fresco repository.
And then in the client code, you just need to set your decode options to the image request with ImageRequestBuilder.setImageDecodeOptions.
I am trying to export the vertices, faces and colors of a selected object in Navisworks. After a bit of research I found that this can be done with the COM API provided by Navisworks. I used GenerateSimplePrimitives to get the triangles, which looks something like this:
using Autodesk.Navisworks.Api;
using Autodesk.Navisworks.Api.Plugins;
using ComBridge = Autodesk.Navisworks.Api.ComApi.ComApiBridge;
using COMApi = Autodesk.Navisworks.Api.Interop.ComApi;
#region InwSimplePrimitivesCB Class
class CallbackGeomListener : COMApi.InwSimplePrimitivesCB
{
public void Line(COMApi.InwSimpleVertex v1,
COMApi.InwSimpleVertex v2)
{
// do your work
}
public void Point(COMApi.InwSimpleVertex v1)
{
// do your work
}
public void SnapPoint(COMApi.InwSimpleVertex v1)
{
// do your work
}
public void Triangle(COMApi.InwSimpleVertex v1,
COMApi.InwSimpleVertex v2,
COMApi.InwSimpleVertex v3)
{
// do your work
}
}
#endregion
#region NW Plugin
[PluginAttribute("Test","ADSK",DisplayName= "Test")]
[AddInPluginAttribute(AddInLocation.AddIn)]
public class Class1:AddInPlugin
{
public override int Execute(params string[] parameters)
{
// get the current selection
ModelItemCollection oModelColl =
Autodesk.Navisworks.Api.Application.
ActiveDocument.CurrentSelection.SelectedItems;
//convert to COM selection
COMApi.InwOpState oState = ComBridge.State;
COMApi.InwOpSelection oSel =
ComBridge.ToInwOpSelection(oModelColl);
// create the callback object
CallbackGeomListener callbkListener =
new CallbackGeomListener();
foreach (COMApi.InwOaPath3 path in oSel.Paths())
{
foreach (COMApi.InwOaFragment3 frag in path.Fragments())
{
// generate the primitives
frag.GenerateSimplePrimitives(
COMApi.nwEVertexProperty.eNORMAL,
callbkListener);
}
}
return 0;
}
}
#endregion
The problem is that I am not able to find the faces; I do get the vertices from the Triangle method.
Please let me know if anyone knows how to find the faces and the color of the object.
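A note on the faces part: each Triangle callback already *is* a face, since the three vertices it hands you form one triangular face of the mesh. To get an indexed vertex/face list (e.g. for OBJ export), deduplicate vertices as they arrive and record a triple of indices per callback. A sketch of just that bookkeeping, independent of the COM API (class and member names are illustrative, shown in Kotlin; the C# version is analogous):

```kotlin
// Accumulates deduplicated vertices and per-triangle index triples,
// mirroring what you would do inside the Triangle() callback.
class MeshBuilder {
    val vertices = mutableListOf<Triple<Double, Double, Double>>()
    val faces = mutableListOf<Triple<Int, Int, Int>>()
    private val seen = HashMap<Triple<Double, Double, Double>, Int>()

    // Return the index of v, adding it to the vertex list on first sight.
    private fun indexOf(v: Triple<Double, Double, Double>): Int =
        seen.getOrPut(v) { vertices.add(v); vertices.size - 1 }

    // Call once per Triangle() callback.
    fun addTriangle(a: Triple<Double, Double, Double>,
                    b: Triple<Double, Double, Double>,
                    c: Triple<Double, Double, Double>) {
        faces.add(Triple(indexOf(a), indexOf(b), indexOf(c)))
    }
}
```

Two triangles sharing an edge then yield four unique vertices and two faces, which is exactly the vertex/face split an exporter needs.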
I am using a ValueConverter to get the thumbnail for an mp4 file that was recorded directly with WinRT's MediaCapture. After much debugging and trying alternate approaches, I've settled on the converter code below. I am getting the error The component cannot be found. (Exception from HRESULT: 0x88982F50) from the GetThumbnailAsync method.
I have confirmed that the thumbnail is being shown for the video in the Xbox Video app and the file explorer app when I use CopyTo(KnownFolders.VideosLibrary).
The converter seems to work fine when it's an external video file, but not with one of my app's mp4s. Is there something wrong with my converter or can you reproduce this?
SEE UPDATE 1: I tried to get the thumbnail when the file is first created; the same error occurs.
public class ThumbnailToBitmapImageConverter : IValueConverter
{
readonly StorageFolder localFolder = ApplicationData.Current.LocalFolder;
BitmapImage image;
public object Convert(object value, Type targetType, object parameter, string language)
{
if (Windows.ApplicationModel.DesignMode.DesignModeEnabled)
return "images/ExamplePhoto2.jpg";
if (value == null)
return "";
var fileName = (string)value;
if (string.IsNullOrEmpty(fileName))
return "";
var bmi = new BitmapImage();
bmi.SetSource(Thumb(fileName).Result); // blocking with .Result is only deadlock-safe because Thumb uses ConfigureAwait(false)
return bmi;
}
private async Task<StorageItemThumbnail> Thumb(string fileName)
{
try
{
var file = await localFolder.GetFileAsync(fileName)
.AsTask().ConfigureAwait(false);
var thumbnail = await file.GetScaledImageAsThumbnailAsync(ThumbnailMode.ListView, 90, ThumbnailOptions.UseCurrentScale)
.AsTask().ConfigureAwait(false);
return thumbnail;
}
catch (Exception ex)
{
new MessageDialog(ex.Message).ShowAsync();
}
return null;
}
public object ConvertBack(object value, Type targetType, object parameter, string language)
{
throw new NotImplementedException();
}
}
UPDATE 1
I decided to go back to where I save the video to a file, grab the thumbnail there, and save it to an image for use later. I get the same error; here is the code for grabbing and saving the thumbnail after the video is saved:
var thumb = await videoStorageFile.GetThumbnailAsync(ThumbnailMode.ListView);
var buffer = new Windows.Storage.Streams.Buffer(Convert.ToUInt32(thumb.Size));
var thumbBuffer = await thumb.ReadAsync(buffer, buffer.Capacity, InputStreamOptions.None);
using (var str = await thumbImageFile.OpenAsync(FileAccessMode.ReadWrite))
{
await str.WriteAsync(thumbBuffer);
}
I have not tested this out, but it should work. In the model that you are binding to, replace the property for your thumbnail with a new class named Thumbnail. Bind to that property rather than to your video location. When the video location changes, create a new thumbnail.
public class VideoViewModel : INotifyPropertyChanged
{
    private string _location;
    private Thumbnail _thumbnail;
    // PropertyChanged event and OnPropertyChanged helper omitted for brevity
public string VideoLocation
{
get { return _location; }
set
{
_location = value;
Thumbnail = new Thumbnail(value);
OnPropertyChanged();
}
}
public Thumbnail Thumbnail
{
get { return _thumbnail; }
set
{
_thumbnail = value;
OnPropertyChanged();
}
}
}
The Thumbnail class. This is just a shell, ready for you to fill out the rest:
public class Thumbnail : INotifyPropertyChanged
{
    private Task<BitmapSource> _image;

    public Thumbnail(string location)
    {
        Image = GetThumbFromVideoAsync(location);
    }

    private Task<BitmapSource> GetThumbFromVideoAsync(string location)
    {
        BitmapSource result = null;
        // decode the thumbnail from the video here
        // set Image again afterwards to force a change notification
        Image = Task.FromResult(result);
        return Image;
    }

    public Task<BitmapSource> Image
    {
        get { return _image; }
        private set
        {
            _image = value;
            OnPropertyChanged();
        }
    }
    // PropertyChanged event and OnPropertyChanged helper omitted for brevity
}
You can still have a value converter in place. It would check whether the task has completed; if it has not, it shows some default image. If the task has faulted, it can show an error image:
public class ThumbnailToBitmapImageConverter : IValueConverter
{
public object Convert(object value, Type targetType, object parameter, string language)
{
var thumbnail = value as Thumbnail;
if (thumbnail == null) return GetDefaultBitmap();
if (thumbnail.Image.IsCompleted == false) return GetDefaultBitmap();
if (thumbnail.Image.IsFaulted) return GetBadImage();
return thumbnail.Image.Result;
}
public object ConvertBack(object value, Type targetType, object parameter, string language)
{
throw new NotImplementedException();
}
private BitmapSource GetDefaultBitmap()
{
    // load and return a default image
}
private BitmapSource GetBadImage()
{
    // load and return a "!" error image
}
}
I am a new cocos2d-x developer, making a game that uses Box2D bodies loaded through PhysicsEditor. A level uses more than 20 bodies; I create a separate body for each sprite attached to it, and similar bodies are used in the other levels. There are 50 levels in the game, and for each level I have made a separate class and written the b2Body loading function again. All the code works, but I want to make a generic body-loading function in one class so I can reuse it across all levels. I also have to destroy a particular body and its sprite when the sprite is touched.
//Sprites:
rect_sprite1=CCSprite::create("rect1.png");
rect_sprite1->setScaleX(rX);
rect_sprite1->setScaleY(rY);
this->addChild(rect_sprite1,1);
rect_sprite2=CCSprite::create("rect2.png");
rect_sprite2->setScaleX(rX);
rect_sprite2->setScaleY(rY);
this->addChild(rect_sprite2,1);
rect_sprite3=CCSprite::create("circle.png");
rect_sprite3->setScale(rZ);
this->addChild(rect_sprite3,1);
GB2ShapeCache::sharedGB2ShapeCache()->addShapesWithFile("obs.plist");
//body loading function
void Level1::addNewSpriteWithCoords()
{
CCSize winSize = CCDirector::sharedDirector()->getWinSize();
b2BodyDef bodyDef1;
bodyDef1.type=b2_dynamicBody;
bodyDef1.position.Set((winSize.width*0.38)/PTM_RATIO,(winSize.height*0.4) /PTM_RATIO);
bodyDef1.userData = rect_sprite1;
body1 = (MyPhysicsBody*)world->CreateBody(&bodyDef1);
body1->setTypeFlag(7);
// add the fixture definitions to the body
GB2ShapeCache *sc1 = GB2ShapeCache::sharedGB2ShapeCache();
sc1->addFixturesToBody(body1,"rect1", rect_sprite1);
rect_sprite1->setAnchorPoint(sc1->anchorPointForShape("rect1"));
b2BodyDef bodyDef2;
bodyDef2.type=b2_dynamicBody;
bodyDef2.position.Set((winSize.width*0.62)/PTM_RATIO,
(winSize.height*0.4)/PTM_RATIO);
bodyDef2.userData = rect_sprite2;
body2 = (MyPhysicsBody*)world->CreateBody(&bodyDef2);
body2->setTypeFlag(7);
// add the fixture definitions to the body
GB2ShapeCache *sc2 = GB2ShapeCache::sharedGB2ShapeCache();
sc2->addFixturesToBody(body2,"rect2", rect_sprite2);
rect_sprite2->setAnchorPoint(sc2->anchorPointForShape("rect2"));
b2BodyDef bodyDef3;
bodyDef3.type=b2_dynamicBody;
bodyDef3.position.Set((winSize.width*0.5)/PTM_RATIO, (winSize.height*0.23)/PTM_RATIO);
bodyDef3.userData = rect_sprite3;
body3 = (MyPhysicsBody*)world->CreateBody(&bodyDef3);
body3->setTypeFlag(7);
// add the fixture definitions to the body
GB2ShapeCache *sc3 = GB2ShapeCache::sharedGB2ShapeCache();
sc3->addFixturesToBody(body3,"circle", rect_sprite3);
rect_sprite3->setAnchorPoint(sc3->anchorPointForShape("circle"));
}
void Level1::ccTouchesBegan(cocos2d::CCSet* touches, cocos2d::CCEvent* event)
{
    // touchPoint, box and rect are derived from the touch location (code omitted)
if(box->containsPoint(touchPoint))
{
this->removeChild(((CCSprite*)rect),true);
if(((CCSprite*)rect)==rect_sprite1)
{
rect_sprite1=NULL;
world->DestroyBody(body1);
Utils::setCount(Utils::getCount()-1);
}
if(((CCSprite*)rect)==rect_sprite2)
{
rect_sprite2=NULL;
world->DestroyBody(body2);
Utils::setCount(Utils::getCount()-1);
}
if(((CCSprite*)rect)==rect_sprite3)
{
rect_sprite3=NULL;
world->DestroyBody(body3);
Utils::setCount(Utils::getCount()-1);
}
    }
}
I am doing the same for the other levels.
If anyone knows a better approach, please suggest one. Thanks.
This seems more like a "What design pattern should I use?" than a specific problem with loading the code.
In general, when I want to create "Entities" that require a physical body, I use a base class that contains the Box2D body as one of its members. The base class is a container for the body (which is assigned to it) and is responsible for destroying the body (removing it from the Box2D world) when the Entity is destroyed.
Derived classes can load the body from the Box2D shape cache. This is best shown through an example: in a game I am working on, a swarm of differently shaped asteroids circles a sun.
The base class, Entity, contains the body and destroys it when the Entity is destroyed:
class Entity : public HasFlags
{
public:
enum
{
DEFAULT_ENTITY_ID = -1,
};
private:
uint32 _ID;
// Every entity has one "main" body which it
// controls in some way. Or not.
b2Body* _body;
// Every entity has a scale size from 1 to 100.
// This maps on to the meters size of 0.1 to 10
// in the physics engine.
uint32 _scale;
protected:
void SetScale(uint32 value)
{
assert(value >= 1);
assert(value <= 100);
_scale = value;
}
public:
void SetBody(b2Body* body)
{
assert(_body == NULL);
if(_body != NULL)
{
CCLOG("BODY SHOULD BE NULL BEFORE ASSIGNING");
_body->GetWorld()->DestroyBody(_body);
_body = NULL;
}
_body = body;
if(body != NULL)
{
_body->SetUserData(this);
for (b2Fixture* f = _body->GetFixtureList(); f; f = f->GetNext())
{
f->SetUserData(this);
}
}
}
inline void SetID(uint32 ID)
{
_ID = ID;
}
inline uint32 GetID() const
{
return _ID;
}
virtual string ToString(bool updateDescription = false)
{
string descr = "ID: ";
descr += _ID;
descr += "Flags: ";
if(IsFlagSet(HF_IS_GRAPH_SENSOR))
descr += "IS_FLAG_SENSOR ";
return descr;
}
Entity() :
_ID(DEFAULT_ENTITY_ID),
_body(NULL),
_scale(1)
{
}
Entity(uint32 flags, uint32 scale) :
HasFlags(flags),
_ID(DEFAULT_ENTITY_ID),
_body(NULL),
_scale(scale)
{
}
virtual void Update()
{
}
virtual void UpdateDisplay()
{
}
virtual ~Entity()
{
if(_body != NULL)
{
_body->GetWorld()->DestroyBody(_body);
}
}
inline static float32 ScaleToMeters(uint32 scale)
{
return 0.1*scale;
}
inline Body* GetBody()
{
return _body;
}
inline const Body* GetBody() const
{
return _body;
}
inline uint32 GetScale()
{
return _scale;
}
inline float32 GetSizeMeters()
{
return ScaleToMeters(_scale);
}
};
The Asteroid class itself is responsible for loading one of several different "Asteroid" shapes from the shape cache. However, ALL the asteroids have common logic for making them move about the center of the screen. They have a rope joint attached and the Update(...) function adds some "spin" to them so they rotate around the center:
class Asteroid : public Entity
{
private:
b2Fixture* _hull;
Vec2 _anchor;
CCSprite* _sprite;
float32 _targetRadius;
public:
// Some getters to help us out.
b2Fixture& GetHullFixture() const { return *_hull; }
float32 GetTargetRadius() { return _targetRadius; }
CCSprite* GetSprite() { return _sprite; }
void UpdateDisplay()
{
// Update the sprite position and orientation.
CCPoint pixel = Viewport::Instance().Convert(GetBody()->GetPosition());
_sprite->setPosition(pixel);
_sprite->setRotation(-CC_RADIANS_TO_DEGREES(GetBody()->GetAngle()));
}
virtual void Update()
{
Body* body = GetBody();
Vec2 vRadius = body->GetPosition();
Vec2 vTangent = vRadius.Skew();
vTangent.Normalize();
vRadius.Normalize();
// If it is not moving...give it some spin.
if(fabs(vTangent.Dot(body->GetLinearVelocity())) < 1)
{
body->SetLinearDamping(0.001);
body->ApplyForceToCenter(body->GetMass()*1.5*vTangent);
body->ApplyForce(vRadius,body->GetMass()*0.05*vRadius);
}
else
{
body->SetLinearDamping(0.05);
}
}
~Asteroid()
{
}
Asteroid() :
Entity(HF_CAN_MOVE | HF_UPDATE_PRIO_5,50)
{
}
bool Create(b2World& world, const string& shapeName,const Vec2& position, float32 targetRadius)
{
_targetRadius = targetRadius;
_anchor = position;
string str = shapeName;
str += ".png";
_sprite = CCSprite::createWithSpriteFrameName(str.c_str());
_sprite->setTag((int)this);
_sprite->setAnchorPoint(ccp(0.5,0.5));
// _sprite->setVisible(false);
b2BodyDef bodyDef;
bodyDef.position = position;
bodyDef.type = b2_dynamicBody;
Body* body = world.CreateBody(&bodyDef);
assert(body != NULL);
// Add the polygons to the body.
Box2DShapeCache::instance().addFixturesToBody(body, shapeName, GetSizeMeters());
SetBody(body);
return true;
}
};
The Create(...) function for the Asteroid is called from the loading code in the scene, which could easily be a .csv file with the shape names in it. It actually reuses the names several times (there are only about 10 asteroid shapes).
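That loading code can be as little as a table of specs iterated in a loop, which is exactly what removes the per-level duplication in the question. An engine-free sketch of the idea (Kotlin for brevity; the names, the specs, and the PTM_RATIO value are illustrative):

```kotlin
// One row per body: shape name in the shape cache, sprite file,
// and position as a fraction of the screen size.
data class BodySpec(val shape: String, val sprite: String,
                    val xFrac: Float, val yFrac: Float)

const val PTM_RATIO = 32f

// Convert a fractional screen position to Box2D meters,
// as in bodyDef.position.Set(winSize.width * frac / PTM_RATIO, ...).
fun toMeters(frac: Float, screenPx: Float): Float = screenPx * frac / PTM_RATIO

// Level 1 from the question, expressed as data instead of repeated code.
val level1 = listOf(
    BodySpec("rect1", "rect1.png", 0.38f, 0.4f),
    BodySpec("rect2", "rect2.png", 0.62f, 0.4f),
    BodySpec("circle", "circle.png", 0.5f, 0.23f),
)
```

addNewSpriteWithCoords then becomes a single loop over such a list, creating the sprite, the body, and the fixtures from each spec, and every level is just a different table (or .csv file).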
You will notice that the Asteroid also has a CCSprite associated with it. Not all Entities have a sprite, but some do. I could have created an Entity-Derived class (EntityWithSprite) for this case so that the sprite could be managed as well, but I try to avoid too many nested classes. I could have put it into the base class, and may still. Regardless, the Asteroids contain their own sprites and load them from the SpriteCache. They are updated in a different part of the code (not relevant here, but I will be happy to answer questions about it if you are curious).
NOTE: This is part of a larger code base and there are features like zooming, cameras, graph/path finding, and lots of other goodies in the code base. Feel free to use what you find useful.
You can find all the (iOS) code for this on GitHub, and there are videos and tutorials posted on my site.