How do I use a Forge mod in ModCoderPack without using Forge API [closed]

I would like to use a Forge mod in ModCoderPack, but I don't want to include the Forge source. Is there a way to add a mod's source code to the vanilla sources for use in my client? Or how can I port a Forge mod into the vanilla sources that ModCoderPack generated? Any help would be great! Thanks!
P.S. I have permission to use the mod in my client.

There's no easy method or tool to achieve what you're after. Primarily because of the following differences:
MCP:
[ModCoderPack] was created to help mod creators to decompile, change and recompile the Minecraft classes.
Forge:
[Forge] is a massive API and mod loader used by modders to hook into Minecraft code and create mods. Forge contains numerous hooks into the Minecraft game engine, allowing modders to create mods with a high level of compatibility
So Forge creates an accessible API for developing easily maintainable mods whereas MCP is used to create modified versions of Minecraft client/server, distinct from the concept of a mod.
Forge is built on top of MCP and adds lots of new functionality. Depending on your mod's complexity, it would take a tremendous amount of effort to rewrite the mod directly into Minecraft's deobfuscated source code.
Unless you have a very good reason to need this, do as the creator of MCP said and just use Forge!

Related

How to upload a website without an HTML file [closed]

Hi!
I have created a website where all the files are CSS, JS, and Pug. When I want to publish the site, I need to provide an index.html file from which the site will start. The problem is that I do not have such a file.
Does anyone know how to deal with such a problem?
In addition, I have been running the site at localhost:3000. Does anyone know how to set it up so that it will work when I upload it?
Thanks in advance to all the helpers.
Your mention of localhost:3000 implies that you have written a website which depends on Node.js for server-side code (at a minimum this will involve the translation of your Pug templates into HTML on demand).
There are two general approaches you can take to solve this problem:
Find hosting which supports your server-side code and deploy your Node.js application to it. (This will not be typical static or shared hosting).
Generate static HTML documents from your application and upload those HTML documents. (The specifics will depend on exactly what your server-side implementation does and will probably be a significant amount of work. Typically, if you wanted to take this approach, you would have used a framework designed to output static sites from the outset; a minimal sketch follows below.)
Obviously, if your server-side code processes user input (such as form submissions), option 2 will not work.
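For the simple case, where the Pug templates don't depend on request data, you can pre-render them yourself. Here is a minimal sketch, assuming your templates live in a views/ directory and that views/index.pug is the entry page (both assumptions):

```javascript
// build.js: compile every Pug template in views/ into static HTML in dist/.
// Run with: node build.js   (requires: npm install pug)
const pug = require('pug');
const fs = require('fs');
const path = require('path');

const viewsDir = 'views';
const outDir = 'dist';
fs.mkdirSync(outDir, { recursive: true });

for (const file of fs.readdirSync(viewsDir)) {
  if (path.extname(file) !== '.pug') continue;
  // Render the template to an HTML string with no request-specific data.
  const html = pug.renderFile(path.join(viewsDir, file));
  const outName = path.basename(file, '.pug') + '.html';
  fs.writeFileSync(path.join(outDir, outName), html);
}
console.log('Static HTML written to', outDir);
```

Copy your CSS and JS into dist/ alongside the generated HTML, and dist/ becomes the folder you upload; the index.html the host asks for is the one generated from index.pug.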

How well does Sitecore 7 lend itself to presenting external JSON data? [closed]

I'm asking because I am not sure what kind of person we'll need to hire (ASP.NET? Sitecore? Angular? jQuery?) to implement the following for us:
Our school is looking to make data on courses (JSON format, about 600 courses) available as an “online catalog.” The static info (programs information, resources, etc.) will be hosted in Sitecore 7.
We’d like to see the online course catalog closely integrated with the rest of the site, so we’re looking for best approaches on how to do that.
Some manipulation of the JSON data is required: course detail pages should be simple enough, but we'll also need course listings (not necessarily displaying all 600 courses at once in one long list, but segmented by programs, class formats and locations, etc.) as well as a "course search" functionality.
Would Sitecore do that well enough out-of-the-box, or would it be better/easier to go with something like AngularJS on top of Sitecore?
Please ask me for additional info if I have left something important out or if anything is unclear.
I agree with Dijkgraaf's comment, but to provide you with an answer: Sitecore is suitable for your requirements, but it is a framework, which means it won't meet your requirements out of the box. You will need a developer who knows Sitecore and, by extension, .NET (Sitecore is built on .NET).
These developers will also know how to work with JSON, most likely serving it up from Sitecore via a .NET technology called Web API. The JSON can then be manipulated with JavaScript or AngularJS. It is less common for Sitecore developers to be familiar with AngularJS, however.
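To make that concrete, here is a minimal ASP.NET Web API sketch of the kind of endpoint such a developer would build. Course, CourseRepository, and the sample data are all hypothetical stand-ins for however the ~600-course JSON feed is actually loaded:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Web.Http;

// Hypothetical course model; the real fields come from your JSON feed.
public class Course
{
    public string Code { get; set; }
    public string Title { get; set; }
    public string Program { get; set; }
}

// Hypothetical stand-in for loading the real course data.
public static class CourseRepository
{
    public static IEnumerable<Course> GetAll()
    {
        return new[]
        {
            new Course { Code = "BIO101", Title = "Intro to Biology",   Program = "biology" },
            new Course { Code = "CS101",  Title = "Intro to Computing", Program = "cs" }
        };
    }
}

public class CoursesController : ApiController
{
    // GET api/courses?program=biology
    // Web API serializes the result to JSON, which AngularJS
    // (or plain JavaScript) on the Sitecore pages can consume directly.
    public IEnumerable<Course> Get(string program = null)
    {
        var courses = CourseRepository.GetAll();
        return string.IsNullOrEmpty(program)
            ? courses
            : courses.Where(c => c.Program == program);
    }
}
```

Segmented listings and the course search would be variations on the same pattern; Sitecore 7's built-in search index is another option for the search part.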

Ruby Application Built Locally, How do I Put it on to a Website? [closed]

I'm having a frustrating time with Ruby. I've been working on a program that randomly generates a string from two arrays, and I'm fine and dandy on that front; I'm fluent in the language. The problem I am having is putting it online. I've been trying to figure out Rails, with no luck running my piece of software. Can anyone help me along the way here? I'm unsure how to get this thing onto a website for launch.
Thanks in advance!
Sounds like Rails is overkill for this. If you only plan on having very simple actions, Sinatra might be better for you. http://www.sinatrarb.com/intro.html
After creating a simple API in Sinatra, you can then serve static assets (HTML, JS, CSS) where you can then use simple AJAX calls to hit your Sinatra backend.
You can get a Heroku account for free to host your application. Hosting a Sinatra application on Heroku should be pretty easy: http://blog.kuntz.co/2014/03/15/deploying-a-sinatra-app-to-heroku.html
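For illustration, here is a minimal Sinatra sketch; the two word arrays are placeholders for whatever your generator actually uses:

```ruby
# app.rb: a tiny Sinatra service around a random string generator.
# Run with: ruby app.rb   (requires: gem install sinatra)
require 'sinatra'

ADJECTIVES = %w[quick lazy sleepy].freeze
NOUNS      = %w[fox dog cat].freeze

# GET /generate returns one randomly assembled string;
# a static HTML page can fetch this via an AJAX call.
get '/generate' do
  content_type 'text/plain'
  "#{ADJECTIVES.sample} #{NOUNS.sample}"
end
```

Sinatra serves this at http://localhost:4567 by default; on Heroku you would add a Procfile and let the platform assign the port.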
You have a lot of options when it comes to deploying your web app.
A lot will depend on the scale of the application, budget and preference.
You have two main options:
Ready-to-use platforms, e.g. Heroku or EngineYard.
Do-it-yourself options: plain servers, e.g. DigitalOcean.
After you make this decision you will find it way easier to deploy your app.
In the case of Heroku (it's free; I recommend it to start), reference this tutorial.
If you do it on your own, you will have to decide which server you will use: Nginx, Apache, or others.
Check out this

API development - Design considerations [closed]

In my 4 years of experience, I have developed a lot of web applications. Now that the concept of the programmable web is getting more and more popular, new APIs are being released almost every day. I would like to develop a Java API/library for a few of these endpoints, e.g. StackApps, Reddit, Digg, etc. What I would like to know from you is:
How does the API of a regular web app differ from the API of these libraries? In other words, what is the difference between the two from a design perspective?
What are the best API development practices?
What are the factors I need to consider before designing the API?
Please comment, if the details are not sufficient.
Stability
If you offer an API to your web app, it is probably because you want other people to build applications using it. If it is not stable, they will hate you for forcing them to keep up with your frequent changes. If that takes too long, their sites might remain non-functional for a long time while they figure out the new way of doing things in your API.
Compactness
You want the API to be complete but compact, as in not too much to remember.
Orthogonality
Design it so there is one and only one way to change each property or trigger an action. Actions in an orthogonal API should have minimal (if ever) side effects.
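As a tiny illustration of orthogonality (all names hypothetical), each property gets exactly one mutator and no action silently changes unrelated state:

```java
// Hypothetical sketch of an orthogonal surface: one mutator per property,
// and no method touches state it does not advertise.
public final class PlayerProfile {
    private String displayName;
    private boolean publicProfile;

    // The single entry point for changing the display name.
    public void setDisplayName(String name) { this.displayName = name; }
    public String getDisplayName() { return displayName; }

    // Toggling visibility touches only visibility, never displayName.
    public void setPublicProfile(boolean isPublic) { this.publicProfile = isPublic; }
    public boolean isPublicProfile() { return publicProfile; }
}
```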
Also, it's not good practice to remove a feature from a public API once it has been released.
Security and Authentication
Since the API is web-exposed, you will have to authenticate each request and grant appropriate access. Security common sense applies here.
Fast Responses or Break into pieces
I believe that in a web environment we should have fast responses and avoid requests that take too long to complete. If that's unavoidable, it is better to send an ACK immediately and break the task into several pieces handled by subsequent calls.
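Here is a minimal sketch of that "ACK now, work later" pattern in plain Java. All names are hypothetical; in a real web API, submit() would map to something like a 202 Accepted response plus a status-polling endpoint:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.*;

public class LongTaskService {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final Map<String, Future<String>> jobs = new ConcurrentHashMap<>();

    // Returns immediately with a job id (the "ACK") while work continues.
    public String submit(Callable<String> task) {
        String id = UUID.randomUUID().toString();
        jobs.put(id, pool.submit(task));
        return id;
    }

    // Subsequent call: clients poll with the job id until the work is done.
    public String status(String id) throws Exception {
        Future<String> f = jobs.get(id);
        if (f == null) return "unknown job";
        return f.isDone() ? "done: " + f.get() : "pending";
    }

    public static void main(String[] args) throws Exception {
        LongTaskService service = new LongTaskService();
        String id = service.submit(() -> { Thread.sleep(500); return "report built"; });
        System.out.println(service.status(id)); // likely "pending": we ACKed instantly
        Thread.sleep(600);
        System.out.println(service.status(id)); // "done: report built"
        service.pool.shutdown();
    }
}
```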
From my experience, all good APIs were made not to solve a generic problem, but to solve a concrete problem that requires a certain abstraction. That abstraction then evolves as the requirements and/or the underlying layers change.
So instead of trying to design the API that will do it all, I'd start by finding one or two good problem cases where your API could help.

Anybody know a good extendable open source web-crawler? [closed]

The crawler needs to have an extensible architecture that allows changing the internal process, like implementing new steps (pre-parser, parser, etc.).
I found the Heritrix Project (http://crawler.archive.org/).
But are there other nice projects like that?
Nutch is the best you can do when it comes to a free crawler. It is built on the concepts of Lucene (in an enterprise-scale manner) and is supported by a Hadoop back end using MapReduce (similar to Google) for large-scale data querying. Great products! I am currently reading all about Hadoop in the new (not yet released) Hadoop in Action from Manning. If you go this route, I suggest getting onto their technical review team to get an early copy of the title!
These are all Java based. If you are a .NET guy (like me!) then you might be more interested in Lucene.NET, Nutch.NET, and Hadoop.NET, which are all class-by-class and API-by-API ports to C#.
You may also want to try Scrapy: http://scrapy.org/
It makes it really easy to specify and run your crawlers.
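For a sense of how little code a Scrapy crawler takes, here is a minimal sketch; quotes.toscrape.com is just a public demo site, and the CSS selectors are specific to it:

```python
# quotes_spider.py: a minimal Scrapy spider.
# Run with: scrapy runspider quotes_spider.py -o quotes.json
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = ["http://quotes.toscrape.com/"]

    def parse(self, response):
        # Yield one item per quote block on the page.
        for quote in response.css("div.quote"):
            yield {"text": quote.css("span.text::text").get()}
        # Follow pagination; Scrapy schedules the next request for us.
        next_page = response.css("li.next a::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```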
Abot is a good extensible web crawler. Every part of the architecture is pluggable, giving you complete control over its behavior. It's open source, free for commercial and personal use, and written in C#.
https://github.com/sjdirect/abot
I've recently discovered one called Nutch.
If you're not tied down to a platform, I've had very good experiences with Nutch in the past.
It's written in Java and goes hand in hand with the Lucene indexer.