How to use Google Cloud services for an HTML5 game?

I'm developing an HTML5 multiplayer game. Google has built a couple of these lately, but hasn't released any information on how they were made.
I want the connection between the clients and the server to use WebSockets, not the old long-polling hack.
The storage should be NoSQL / Google Datastore.
The framework should be in Python or JS.
Now, I can't use WebSockets with Google App Engine, which means I have to use Google Compute Engine (GCE). How much of the service should I run on Compute Engine: 100%, or only the sockets, with the rest of the backend on App Engine? The latter seems like a good way to do it, but GCE is in Europe and App Engine doesn't support this location yet, which means GCE would have to talk back and forth across the Atlantic.
I could, on the other hand, develop the whole solution on GCE, but what storage and developer library should I use? I could use the new Google Cloud Datastore, but if I understand it correctly, it's a low-level API for talking to the Datastore. I like how NDB is high level, with models, and takes care of caching. And for the solution, should I use Node.js, Django or something else?

Running your web frontends on App Engine while managing the WebSocket connections on Compute Engine is similar to what Google did for recent Chrome web experiments (see the end of this blog post):
Check out the amazing World Wide Maze Chrome Experiment, developed by
the Chrome team in Japan. This game converts any web site of your
choice into an interactive, three dimensional maze, navigated remotely
via your smartphone. Compute Engine virtual machines run Node.js to
manage the game state and synchronization with the mobile device,
while Google App Engine hosts the game’s web UI. This application
provides an excellent example of the new kinds of rich, high
performance back end services enabled by Google Cloud Platform.
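The division of labor in the quote (a socket tier that owns game state, a separate web tier for the UI) implies a small wire protocol for state updates. As a rough sketch in Python, with an invented message schema (the real World Wide Maze protocol is not public):

```python
import json
import time

def make_state_update(player_id, x, y, seq):
    """Build a hypothetical game-state message the socket tier
    (on Compute Engine) could broadcast to connected clients."""
    return json.dumps({
        "type": "state_update",
        "player": player_id,
        "pos": {"x": x, "y": y},
        "seq": seq,          # monotonically increasing; lets clients drop stale updates
        "ts": time.time(),   # server timestamp, useful for interpolation
    })

def apply_update(world, raw):
    """Merge an update into a client's view of the world,
    ignoring out-of-order messages."""
    msg = json.loads(raw)
    player = msg["player"]
    if msg["seq"] > world.get(player, {}).get("seq", -1):
        world[player] = {"pos": msg["pos"], "seq": msg["seq"]}
    return world
```

The sequence number matters more than the transport: whether you relay these over Socket.io or the Channel API, late or duplicated deliveries must not roll player positions backwards.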
You should also be able to create App Engine applications in Europe after filling in the following form or signing up for a Premier account.
Google Cloud Datastore allows you to share your data between App Engine (using NDB, if you use Python) and Compute Engine (using the low-level API).
You can follow this issue about NDB support for Google Cloud Datastore.

Related

Google Compute API Anonymous Requests

I just noticed thousands of anonymous requests hitting all of the Compute Engine API list endpoints. I have no instances running, and I'm only using Firebase and Cloud Build, Source, and Registry. Please see the attached screenshot of the API metrics report.
Any reason for this?
(screenshot: Compute Engine API metrics)
On the backend there are certain API calls needed to make sure that your project is healthy; these "Anonymous" requests represent an account used by the backend service to make health checks.
Anonymous API calls (these could be just Compute Engine "list" calls) don't imply that you have enabled anything on your side. A lot of different sections in the Console make calls to the Compute Engine API, and there's no easy way to figure out which section made the calls, but they are expected.
These kinds of "Anonymous" Compute Engine API calls are part of the internal monitoring tools needed to make sure that your project is healthy, and they are triggered at random. The metrics might disappear and come back throughout the life of the project.

Considering Tyk API Gateway - open source version

Project background: building an API-driven Learning Management System. The back-end system will be receiving data from multiple systems and interfaces: web, mobile, VR.
We're looking at API gateways to front our APIs, preferably an open-source gateway, but we need to be sure that support and service are available. We tried out Tyk.io and it feels like it might be the way to go. I've been reading other Stack Overflow threads around this, and it looks like Tyk's gateway fares quite well against the likes of Kong and WSO2.
Main areas of consideration for us are:
Rate-limiting
Open ID Connect authentication
Analytics
Scalability
Hybrid model of hosting: a combination of on-prem and cloud, depending on the compliance requirements of educational institutes (this probably rules out AWS's gateway)
It would be really helpful if anyone who is using or has used Tyk.io in production projects could share their experience, especially with enterprise clients/projects.
Full disclosure: I work for Tyk, so of course think that Tyk is the best fit for your project ;)
Seriously, though - Tyk can do all of those things you're after. Here are links to the documentation for each of the big items on your list:
Rate-limiting
Open ID Connect authentication
Analytics
Scalability
Hybrid model of hosting
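To make the rate-limiting item concrete: gateways such as Tyk typically enforce some variant of a token bucket per API key. A minimal sketch of the algorithm (illustrative only, not Tyk's actual implementation; in Tyk the limits are configured per key, not hand-coded):

```python
import time

class TokenBucket:
    """Token-bucket limiter: allows bursts up to `capacity`
    requests, refilling at `rate` tokens per second."""

    def __init__(self, rate, capacity, now=time.monotonic):
        self.rate = float(rate)
        self.capacity = float(capacity)
        self.tokens = float(capacity)  # start full: an idle key may burst
        self.now = now                 # injectable clock, handy for testing
        self.last = now()

    def allow(self):
        """Return True if one request may pass, consuming a token."""
        t = self.now()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

The same shape generalizes to distributed enforcement: the gateway keeps the counters in a shared store (Tyk uses Redis) so every gateway node sees the same bucket.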
You can also post on the Tyk community for help, if you haven’t already, or search to see what else others have said.
The Tyk Open Source API Gateway will do everything you need, even outputting analytics to different sinks, like Elasticsearch, MongoDB or just CSV.
In addition, you can use our API Management Platform to control your open-source gateway. The Tyk API Management Platform includes a dashboard with analytics and an out-of-the-box developer portal. Tyk is free to use, under a developer license, to manage a single gateway node, which is ideal if you are doing a PoC.
Hope this helps and please keep in touch to let us know more about your use case.

Box api-content developer account to production-ready account

I currently have a developer account set up in Box and am looking for steps to move it to production. I cannot find details on:
whether there is a limit on the number of users allowed
how to turn production mode on
I have set up the initial account with an auth redirect URL, and configured my app key and token in my web application.
In terms of "productizing", I've heard this term used a few different ways:
1) To just make an app generally usable among consumers of the third-party app, all that is needed is a functional integration with the Box APIs. Assuming that you have implemented OAuth correctly and integrated our APIs functionally, there is no barrier to everyday users using that integration between Box and the third-party app.
2) To make an app available to Box users ("productionalize" is a term I hear often), the best way to do this is through our gallery. Developers can follow these instructions to create a listing in our App Gallery: cloud.box.com/appgallerylisting
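The "implemented OAuth correctly" part of (1) starts with the standard OAuth 2.0 authorization-code redirect. A hedged sketch of building that first authorization URL in Python (the host and all values shown are illustrative; check Box's current OAuth documentation for the exact endpoint):

```python
from urllib.parse import urlencode

# Illustrative endpoint; confirm against Box's current OAuth 2.0 docs.
AUTHORIZE_URL = "https://app.box.com/api/oauth2/authorize"

def build_authorize_url(client_id, redirect_uri, state):
    """First leg of the OAuth 2.0 authorization-code flow: the user
    is sent to this URL, logs in to Box, and Box redirects back to
    `redirect_uri` with ?code=... (and the same `state`)."""
    params = {
        "response_type": "code",
        "client_id": client_id,       # your app key from the developer console
        "redirect_uri": redirect_uri, # must match the registered redirect URL
        "state": state,               # CSRF guard: verify it on the redirect
    }
    return AUTHORIZE_URL + "?" + urlencode(params)
```

The auth redirect URL mentioned in the question is the `redirect_uri` here; the returned `code` is then exchanged server-side for an access token.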

Additional tutorials or worked examples of best practice for configuring multi-VM projects in Google Compute Engine

I was hoping people would know of more samples and best-practice guides for configuring systems on Google Compute Engine, so I can gain more experience deploying them and apply the knowledge to my own projects.
I had a look at https://developers.google.com/compute/docs/samples-and-videos#samples, which runs through deploying a Cassandra cluster and Hadoop using scripts, but I was hoping there might be more available, including on the following topics:
Load balancing web servers across zones, including configuring networking, firewalls and the load balancer
Fronting Tomcat servers with Apache behind a load balancer
Multi-network systems in Compute Engine using subnetting
Multi-project systems, and how to structure them for reliability and secure interoperability
Ideally they would be easy-to-follow projects that start from a blank project and end up with a sample site running across multiple VMs and zones with recommended security in place, a bit like the videos you see for GAE coding examples that go from hello world to something more complex, but for infrastructure rather than code.
Does anyone know of any?
You may want to check out https://cloud.google.com/developers/#resources for tutorials and samples, as well as http://googlecloudplatform.github.io
I'm new to the forums, so I can only post two links. Taking a quick look, I see several topics that may be of interest to you:
Managing Hadoop Clusters on Compute Engine
Auto Scaling on the Google Cloud Platform
Apache Hadoop, Hive, and Pig on Google Compute Engine
Compute Engine Load Balancing in Action
I hope this helps!
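On the cross-zone load-balancing topic from the question, the core idea is simple enough to sketch: interleave instances from different zones in the rotation, so losing one zone only thins the pool rather than knocking out a contiguous run of backends. A toy illustration in Python (zone names and IPs are hypothetical; on Compute Engine the real rotation is handled by the load-balancing target pool):

```python
import itertools

# Hypothetical instances spread across two zones; in Compute Engine
# these would sit behind a forwarding rule and target pool.
BACKENDS = {
    "europe-west1-a": ["10.0.0.2", "10.0.0.3"],
    "europe-west1-b": ["10.0.1.2", "10.0.1.3"],
}

def interleaved_round_robin(backends):
    """Yield backend IPs forever, alternating zones on each pick,
    so consecutive requests never land in the same zone."""
    # zip(*...) pairs up the i-th instance from each zone.
    flat = [ip for group in zip(*backends.values()) for ip in group]
    return itertools.cycle(flat)
```

A real setup adds health checks, so an unhealthy instance (or a whole zone) drops out of the rotation automatically.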

Google App Engine Channel API vs Node.js + Socket.io

Please help me choose which one to use for my university project (I want to develop a shared multi-user whiteboard).
In particular, I am interested in the performance of message exchange between users and the server using the Channel API and Socket.io: which one is quicker, and why?
I have implemented an initial version of the whiteboard (http://jvyrushelloworld.appspot.com/) by following this tutorial: http://blog.greweb.fr/2012/03/30-minutes-to-make-a-multi-user-real-time-paint-with-play-2-framework-canvas-and-websocket/ The code I used is pretty much the same, except for the server side and the message-exchange method: I used Python and the Google Channel API for message exchange; the author of the tutorial used the Play 2 framework and WebSockets.
As you can see, the WebSocket version from the tutorial works much faster (I don't know whether that is my mistake or a Channel API performance issue). Of course, a lot of optimization could be done to improve performance, but I wonder if it is worth continuing with the Channel API for this project, or if it would be better to switch to Socket.io.
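One way to settle "which one is quicker" for your specific setup is to time both transports with the same harness rather than eyeballing them. A sketch, where `send_and_wait` is a placeholder for a blocking round trip over whichever transport you are testing (Channel API or Socket.io):

```python
import time

def measure_rtt(send_and_wait, samples=50):
    """Time `samples` round trips of a message-exchange callable and
    return (median, worst) latency in milliseconds. `send_and_wait`
    must block until the echoed message comes back."""
    timings = []
    for i in range(samples):
        start = time.perf_counter()
        send_and_wait({"stroke": i})  # e.g. one whiteboard stroke event
        timings.append((time.perf_counter() - start) * 1000.0)
    timings.sort()
    return timings[len(timings) // 2], timings[-1]
```

Reporting the median rather than the mean keeps one slow outlier (a reconnect, a GC pause) from dominating the comparison; the worst case is worth tracking separately, since a whiteboard feels laggy on its worst strokes, not its average ones.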