Is Immutable.js v4.0.0-rc.12 ready for production?

I would like to use immutable.js, and see that v4.0.0 is at rc.12. That version string seems to indicate that v4.0.0 is not yet released. Is that because there is a problem with v4.0.0, and we should use 3.8.2, which is three years old? Is there a good fork that publishes v4.0.0 (or higher), so I can depend on it without raising questions about depending upon pre-release packages?

Last autumn a new group of devs started work to finish 4.0 at the immutable-js-oss repo. It's not clear yet whether (or when) it can be merged back into the main repo; otherwise it will be released as a new package.
There are only a few things left to do for a final 4.0 release; 99% of the issues are already finished. If you are curious, have a look at the issues in the 4.0 milestone. The remaining changes are mostly for compatibility with version 3.x, plus a few cases handling very special data (e.g. passing Node.js objects to isolated sub-processes).
Our Electron app has used RC12 since 2019; before that it was on RC9 for a year. So far none of the 10,000 pilots using it has crashed a plane, so I would say it works quite well.
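If you do decide to depend on the RC, pinning the exact pre-release string keeps npm's semver ranges from silently pulling in a different RC later. A minimal sketch of what that might look like in package.json (the version shown is the one discussed above; the rest of the file is hypothetical):

```json
{
  "dependencies": {
    "immutable": "4.0.0-rc.12"
  }
}
```

Note the absence of a `^` or `~` prefix: the dependency resolves to exactly this pre-release and nothing else.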

Future of native-client after WebAssembly Post-MVP

I am sure that after WebAssembly's Post-MVP, asm.js will be deprecated. Even now, a few existing asm.js projects have started to move to WebAssembly, and JavaScript engines (e.g. V8) have started to compile asm.js to WebAssembly, so even if old projects never migrate, end users will still get some of WebAssembly's advantages.
My question is: what about Native Client then? It is not implemented inside the JS engine, which can be a problem. Native Client seems to be deprecated even now. Will it be completely removed from Chrome in the foreseeable future? I would love to see some reduction in Chrome's binary size.
Side questions:
After threads/GC/SIMD/exceptions are included in WebAssembly, is there still anything Native Client has that is missing from WebAssembly (blocking migration)?
It took WebAssembly about two years just to reach MVP; what is the expected time for any one of the Post-MVP features to be finalized?
It seems like the WebAssembly group is tackling multiple Post-MVP features at once instead of one by one; won't that make it slower to finalize any one of them?
Answering the side-questions only, because I no longer work on Native Client. Google's plans are its own to speak for, so I'll make this a wiki.
Update as of 2017/05/20: NaCl support is no longer in glibc. glibc was the original libc supported, and it took quite a while to clean up and upstream; it was only ever supported by NaCl's GCC toolchain. There's still support for musl libc, which works with the more up-to-date LLVM-based NaCl and PNaCl toolchain.
Update as of 2017/05/30: the Chromium team announced the fate of PNaCl and a tentative roadmap of WebAssembly features.
Here are some Native Client features which you haven't mentioned:
Out-of-process, which many consider a bug because it forces asynchronous communication. It allows guaranteed address space, which with 64-bit isn't much of an upside, and was critical to Native Client's double-sandbox design. WebAssembly doesn't have processes yet.
Has blocking to and from the JavaScript event loop, through postMessageAndAwaitResponse. Also seen as a bug by many.
Has many APIs through Pepper. Many mirror the web platform's APIs.
Can do memory protection through mprotect (though execute pages are limited).
POSIX signals can be handled.
Supports computed goto and other irreducible control flow.
Has some Just-in-Time code patching support.
Supports atomics weaker than seq_cst.
Has support for inline assembly, as long as it follows the NaCl validation rules.
Not all of these are in Portable Native Client, though. There's official documentation of differences.
There's no timeline for any of the Post-MVP WebAssembly features. We don't want to rush anything, but we want to deliver the most useful things first. It's a community group, so priorities are really driven by whoever gets involved. Implementations won't be able to tackle all the features at once, but exploration parallelizes well.

SQLAlchemy 1.0 release

We have a reasonably big project in Django that has started to push against the limitations of Django (we mostly use Django for database-related work, not the web interface), and we decided to switch to SQLAlchemy while it's still possible (we don't want to get ourselves into this position :).
The problem is, it really seems this is the worst time we could have picked. SQLA is on the verge of releasing version 1.0, which will probably be a big change to the interface. More importantly, there seems to be some trouble with releasing it: more than a month ago, Mike Bayer tweeted that a release candidate would be available via pip --pre, but it still hasn't happened.
The docs are updated to 1.0, and the Bitbucket repo shows no diff between the master and 1_0 branches. If this were Django, I'd just clone the repository and install it directly; there is an official blessing for that method in the Django documentation. But I can't see any hint that this is "accepted behaviour" in the SQLA community. For example, the installation page doesn't mention 1.0 at all.
Am I too paranoid? Should we just use 0.9.8 and then make a few changes when 1.0 comes out? Or should we build 1.0 manually? Or would it be better to wait? (For how long? I realize the SQLA team doesn't want to heap pressure on itself by committing to a release date, but Mike has kind of already done that with that tweet. :)
I realize this is not exactly an objective question, but someone with knowledge of the SQLA process might have valuable advice. For example, if someone asked me the same thing about Django 2.0, I'd tell them "if it isn't a mission-critical app, just clone and build from the newest repo state - the chance of breakage is small, and you're getting a much better interface". And I'd have the official docs behind me.
As of the day I am rewriting this answer, choosing between SQLAlchemy 0.9.8 (the stable version, released on October 13, 2014) and 1.0 (the "upcoming" version), I would personally pick the stable version.
In a typical software life cycle, beta / bleeding-edge / nightly-build versions tend to have more bugs and breaking changes, which can directly break your system or scripts.
Therefore, choosing the stable version is more appropriate in most cases, unless you specifically need a new feature that only exists in the beta.
Lastly, there are usually migration guides for upgrading your version, but not for downgrading it. In some cases (though probably not SQLAlchemy's), an upgrade is irreversible.
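If you go with the stable release, it's worth pinning it explicitly so a later `pip install` never silently jumps to 1.0; conversely, pip only considers pre-releases when you opt in with `--pre`. A sketch (the 1.0 specifier is hypothetical, since no RC is on PyPI yet):

```shell
# Pin the stable release so upgrades are deliberate, not accidental:
pip install "SQLAlchemy==0.9.8"

# Once a 1.0 release candidate reaches PyPI, opting in would require --pre:
pip install --pre "SQLAlchemy>=1.0.0b1"
```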

How can I enforce Mercurial clients to use a specific version of Mercurial?

As new versions of Mercurial are released, we want to somehow enforce that developers, designers, and others are using the approved (or a later) version of Mercurial. For example, we are currently on version 1.8.3. I'd like some way of automatically denying users access to our repositories with anything before 1.8.3, while allowing any version after. Can I do this in a hook?
We run our server on Windows and IIS. I thought about writing an IIS extension that returned 500 errors for clients not at the right version, but the client doesn't send any version information in its HTTP requests, just "mercurial/proto 1.0", which I assume means version 1.0 of Mercurial's HTTP protocol.
Is what I want possible? Are there any other ways to solve this?
We want to do this because Mercurial is case-sensitive and we are 100% on case-insensitive Windows. This causes us numerous case-folding collisions, and we waste hours on them. Although Mercurial has improved its handling of case, there are still situations where case-folding issues can be introduced. As case-handling improvements are made in new versions, we want some way to move our users onto those versions so that new problems aren't introduced that we then have to spend time fixing.
Do you have strong reasons for wanting to enforce this? Mercurial has internal mechanisms that take care of version features themselves: if you create a repository with 1.8.3 that uses features not yet present in, say, 1.6, then a 1.6 client will refuse to interact with that repository. In other words, Mercurial itself denies access, based not on version number but on actual features. This way, if a new release doesn't break compatibility either way, you can use both versions alongside each other without problems.
Internally, this works because Mercurial adds a list of required features to the repository's meta-information and checks this list before attempting to do anything with it. If the list contains a feature Mercurial doesn't know about (yet), it concludes that it cannot meaningfully interact with the repository and refuses to cooperate.
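For illustration, that feature list lives in the `.hg/requires` file at the repository root; any client that doesn't recognize an entry there aborts rather than touch the repo. A repository created by a 1.x-era client typically contains something like this (the exact entries vary with version and configuration):

```
revlogv1
store
fncache
dotencode
```

A client that predates, say, `dotencode` support will refuse to operate on this repository with an "unknown repository requirement" style error, which is exactly the feature-based gating described above.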
So IMO the best thing to do would be to simply institute a policy that requires developers to use an up-to-date version, without adding additional technical measures. If someone tries to push with a version that's really too old, they'll receive an error message, and if they complain, you can simply point them at the official policy.
If you're on Windows, how about deploying the Mercurial installer using domain policies (e.g. Group Policy)?

Defining a runtime environment

I need to define a runtime environment for my development. The first idea is, of course, not to reinvent the wheel. I downloaded MacPorts, used easy_install, and tried Fink. I always had problems. Right now, for example, I am not able to compile SciPy because the MacPorts installer wants to download and install gcc43, which does not compile on Snow Leopard. A bug is open for this issue, but in the meantime I am basically tied to them for my runtime to be usable.
A technique I learned some time ago was to write a makefile that downloads and builds the runtime/libs with clearly specified versions of libraries and utilities. This predates the MacPorts/Fink/apt approach, but it gives you much more control, although you have to do everything by hand. Of course, this can become a nightmare of its own if the runtime grows, but if you find a problem you can use patch to fix the issue in the downloaded package, then build it.
I have multiple questions:
What is your technique to prepare a well-defined runtime/library collection for your development?
Does MacPorts/Fink/whatever allow me the same flexibility to re-hack things if something goes wrong?
Considering my makefile solution, when my software is finally out for download, what are your suggestions for resolving potential troubles between my development environment and the actual platforms on my users' machines?
Edit: What I don't understand in particular is why other projects don't give me hints. For example, I just downloaded SciPy, a complex library with lots of dependencies. Developers must have all the dependencies set up before working on it. Despite this, there's nothing in the SVN repo that creates this environment.
Edit: Added a bounty to the question. I think this is an important issue and it deserves more answers. I will consider best those answers with real-world examples, with particular attention to any issues that arose and their solutions.
Additional questions to inspire for the Bounty:
Do you perform testing on your environment (to check proper installation, e.g. on an integration machine)?
How do you include your environment at shipping time? If it's C, do you statically link it, or ship the dynamic library and adjust LD_LIBRARY_PATH before running the executable? What about the same issue for Python, Perl, and others?
Do you stick with the runtime, or update it as time passes? Do you download "trunk" packages of your dependency libraries, or a fixed version?
How do you deal with situations like: library foo needs Python 2.5, but you need to develop in Python 2.4 because library bar does not work with Python 2.5?
We use a CMake script that generates makefiles which download (mainly through SVN), configure, and build all our dependencies. Why CMake? It's multiplatform. This works quite well, and we support invocation of scons/autopain/cmake. As we build on several platforms (Windows, Mac OS X, a bunch of Linux variants), we also support different compile flags etc. based on the operating system. Typically a library has a default configuration, and if we encounter a system that needs special configuration, the default is replaced with a specialized one. This works quite well. We did not really find any ready-made solution that would fit our purpose.
That being said, it is a PITA to get up and running; there are a lot of knobs to turn when you need to support several operating systems. I don't think it will become a maintenance nightmare, as the dependencies are quite fixed (libraries are upgraded regularly, but we rarely introduce new ones).
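The approach described above can be sketched with CMake's ExternalProject module, which handles the download/configure/build/install steps per dependency. Everything here (the library name, URL, and flags) is illustrative, not from the original project:

```cmake
# Sketch of a superbuild: each dependency is pinned to a fixed version
# (never "trunk") and rebuilt from source with per-OS flags.
cmake_minimum_required(VERSION 3.10)
project(third_party NONE)
include(ExternalProject)

# Default flags, overridden only for systems that need special treatment.
if(APPLE)
  set(DEP_CFLAGS "-O2 -arch x86_64")
else()
  set(DEP_CFLAGS "-O2 -fPIC")
endif()

ExternalProject_Add(libfoo                       # hypothetical dependency
  URL https://example.com/libfoo-1.4.2.tar.gz    # fixed version, local mirror
  CONFIGURE_COMMAND <SOURCE_DIR>/configure --prefix=<INSTALL_DIR> CFLAGS=${DEP_CFLAGS}
  BUILD_COMMAND make
  INSTALL_COMMAND make install
)
```

Because the versions and flags are spelled out in one place, patching a broken upstream tarball or swapping in a specialized configuration for one OS stays a local, reviewable change.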
virtualenv is good, but it can't do magic - e.g. if you want to use a library that just MUST have Python 2.4 and another one that absolutely NEEDS 2.5 instead, you're out of luck. Nor can virtualenv (or any other tool) help when there's a brand-new release of an OS and half the tools &c just don't support it yet, as you mentioned for Snow Leopard: some problems are just impossible to solve (two libraries with absolutely conflicting needs within the same build), while others just require patience (until all the tools you need are ported to the new OS release, you just have to stick with the previous one).
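What virtualenv can do is keep the two conflicting stacks side by side as separate environments, one per interpreter, as long as you never need both libraries in the same process. A hypothetical sketch (paths and env names are illustrative):

```shell
# One isolated environment per interpreter; -p selects the Python binary.
virtualenv -p /usr/bin/python2.4 env-bar   # for the library stuck on 2.4
virtualenv -p /usr/bin/python2.5 env-foo   # for the library requiring 2.5

# Activate whichever stack the task at hand needs:
. env-foo/bin/activate
```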

Dangers of Implementing Programming Frameworks into Project Source Code Prior to Release Candidate Status?

I've been dwelling on this topic for a long time now, and I wondered if anyone else out there shared my opinion. Isn't it essentially a bad idea to integrate preview versions of programming frameworks into your project code before they reach release-candidate level?!
I had a situation a few months ago where my boss insisted on using the Managed Extensibility Framework to handle dependency injection in a huge internal system we were building. We built the code around a preview version of this framework and then Microsoft released another version of it. We updated and everything broke, huge amounts of code had to be re-understood and changed...total pain!
...I'm getting the feeling that RIA Services could present us with a similar problem (or any other framework implemented into a project's source code prior to its full release).
Opinions welcome.
Well, what else can be said? You're right - using something not even marked as release candidate for core functionality in your app is a considerable risk.
To alleviate the risk, you could try creating a compatibility layer that you adjust to "translate" to new versions of the framework - but that involves a lot of guesswork that may not work out.
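Concretely, the compatibility-layer idea means your application code never calls the preview framework directly; it talks to a small interface you own, with one adapter per framework version. A minimal sketch (in Python for brevity; the container objects and method names are hypothetical stand-ins, not MEF's actual API):

```python
# Compatibility layer around a hypothetical DI framework: the app depends
# only on Resolver, so a breaking framework release means writing one new
# adapter instead of re-understanding the whole codebase.

class Resolver:
    """The interface the application owns and codes against."""
    def resolve(self, name):
        raise NotImplementedError


class PreviewFrameworkAdapter(Resolver):
    """Adapter for the preview version (faked here as dict-style lookup)."""
    def __init__(self, container):
        self._container = container

    def resolve(self, name):
        return self._container[name]


class ReleasedFrameworkAdapter(Resolver):
    """Adapter for the released version, whose API changed to a method call."""
    def __init__(self, container):
        self._container = container

    def resolve(self, name):
        return self._container.get_export(name)


def build_app(resolver):
    # Application code is identical regardless of which adapter is plugged in.
    return resolver.resolve("logger")
```

The guesswork the answer mentions is real: you are betting that `resolve`-style lookups are the only framework surface you need, and a redesigned framework may not map cleanly onto the interface you froze.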
And of course you can just stick with the preview version, if it already does everything you need. But that will bring its own headaches down the road.
All in all, I'd avoid it unless the newfangled thing in question definitely enables you to do something important that would otherwise be impossible, or yields massive productivity gains.