SAP Spartacus Translations - angular9

Is there a way to disable the SAP Spartacus I18next module in order to use my own translation module/strategy?
I'm trying to use my own shared module with translations, but it is based on the i18next library, same as in @spartacus/core. They seem to be conflicting, because separately they work fine.

It is possible, but it would require a lot of work. I18nModule is a core part of the project and is imported in the StorefrontFoundationModule, which is used in the StorefrontModule, and so on.
Therefore, removing it would require importing all of the modules imported in the StorefrontFoundationModule, StorefrontModule and B2cStorefrontModule directly into your AppModule. This is doable, but the app is likely not to work.
Many components and services depend on translations, so you would need to make sure your custom translation is provided in a way that satisfies those dependencies.
Basically, I am saying you are better off trying to extend or override the Spartacus translation functionality to fit your use case. This module is configurable, extendable and powerful. Feel free to take a look at our documentation on the subject: https://sap.github.io/cloud-commerce-spartacus-storefront-docs/i18n/#page-title
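For example, extending the shipped translations is mostly a matter of configuration. Below is a minimal sketch, assuming the @spartacus/assets package and the i18n config shape from the Spartacus 1.x/2.x era (the exact keys may differ in your version, and the overridden searchBox key is just an illustration):

import { NgModule } from '@angular/core';
import { ConfigModule } from '@spartacus/core';
import { translations, translationChunksConfig } from '@spartacus/assets';

// Custom resources are deep-merged over the shipped ones,
// so only the keys you want to override need to appear here.
const customTranslations = {
  en: {
    common: {
      searchBox: { placeholder: 'Search our catalog...' },
    },
  },
};

@NgModule({
  imports: [
    ConfigModule.withConfig({
      i18n: {
        resources: translations,
        chunks: translationChunksConfig,
        fallbackLang: 'en',
      },
    }),
    ConfigModule.withConfig({
      i18n: { resources: customTranslations },
    }),
  ],
})
export class CustomTranslationsModule {}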

Related

How does one make a self-contained package / library of functions

I am writing some support code in the common subset of Matlab/Octave, which comes in the form of a bunch of functions. Let's call it a package.
I want to be able to organize the package, i.e.,
1. put all the relevant function files in a single place, where users are not supposed to store their code;
2. have some internal organization ('subpackages');
3. prevent namespace pollution;
4. have some mechanism for user code to 'import' parts of the package; I don't necessarily want all the functions I provide to be visible from user clients.
On the Matlab side of things, this functionality is pretty much provided by package directories and the 'import' mechanism. This functionality doesn't appear to be available in Octave though (as of 3.6.1).
Given that, I wonder what options remain for organizing my support code package in Octave.
The option of putting everything in a directory and just having the user code do an ADDPATH feels rather unrefined, and doesn't give the level of control I want -- it only addresses point #1 of the list above.
There is plenty of documentation and there are examples in OctaveForge. Just browse the SVN.
Also, there are personal packages all around; for example, this one.
Happy coding!

Using HTML Helpers in Node.js?

There are so many template engines for Node.js and Express, and there is even this detailed comparison: http://paularmstrong.github.com/node-templates/index.html. This led me to check out EJS, Mu2 and JQTpl, and I spent some hours experimenting to find which of them fits my needs best.
I know that there already are several questions concerning which framework is best, but none of them concentrates on the possibility of using helpers. I tried to build a form helper (which should render input tags and their values if I pass an object into it) with each of them, but I did not find a straightforward way of accomplishing it.
Are there any recommendable modules that enable me to use helpers? Maybe even using mustache.js (which - for me - feels like the best of the ones I tried)? Thanks in advance!
I can't point you to the comparison you are looking for, but nearly all the templating engines I've looked at have had a facility for helpers.
If you are using Express (which you mentioned in your question), you can tell Express what helpers you want to expose to whatever template engine you are using (set via the "view engine" app variable) - see the following sections of the Express Guide for details:
View Rendering - explains how to configure Express to use a particular templating engine. The example refers to Jade, which is installed with Express by default, and does support helpers.
Server.helpers() - How to register static view helpers to be passed to your template
Server.dynamicHelpers() - How to register helpers which can access the Request and Response objects (see the sketch below)
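For illustration, here is a minimal sketch of both registration calls, assuming Express 2.x (in Express 3 and later these methods were removed in favour of app.locals and res.locals):

var express = require('express');
var app = express.createServer();

// Static helpers: values and functions made available to every view.
app.helpers({
  // Hypothetical form helper like the one described in the question.
  inputTag: function (name, value) {
    return '<input name="' + name + '" value="' + (value || '') + '">';
  }
});

// Dynamic helpers: computed per request, with access to req and res.
app.dynamicHelpers({
  currentUrl: function (req, res) {
    return req.url;
  }
});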
Some template engines come with support for Express built in, although they may require an extra configuration step. I am partial to CoffeeKup (and the more updated fork coffeecup), which lets you write your views in CoffeeScript; enabling auto-compilation requires an extra call to the Express server object (as covered in the docs):
app.register('.coffee', coffeecup.adapters.express);
Others may offer an additional node package; you may find npm search express- instructive. For example, express-handlebars specifically fixes up app.helpers() and app.dynamicHelpers() to work with Handlebars. (Disclaimer: I haven't used this module personally.)

Extending embedded Python in C++ - Design to interact with C++ instances

There are several packages out there that help in automating the task of writing bindings between C/C++ and other languages.
In my case, I'd like to bind Python; some options for such packages are SWIG, Boost.Python and Robin.
It seems that the straightforward process is to use these packages to create C/C++ linkable libraries (with mostly static functions) and have the higher-level language be extended using them.
However, my situation is that I already have a developed, working system in C++, and I therefore plan to embed Python into it so that future development will be in Python.
It's not clear to me how, and whether it is possible at all, to use these packages to extend embedded Python in such a way that the Python code could interact with the various singleton instances already running in the system, and could instantiate C++ classes and interact with them.
What I'm looking for is insight regarding the design best suited to this situation.
Boost.Python lets you do a lot of those things right out of the box, especially if you use smart pointers. You can even inherit from C++ classes in Python, then pass instances of those back to your C++ code and have everything still work. My favorite resource on how to do various stuff is this (especially check out the "How To" section): http://wiki.python.org/moin/boost.python/.
Boost.Python is especially good if you're using smart pointers or intrusive pointers, as those translate transparently into PyObject reference counting. Also, it's very good at making factory functions look like Python constructors, which makes for very clean Python APIs.
If you're not using smart pointers, it's still possible to do all the things you want, but you have to mess with various return and lifetime policies, which can give you a headache.
To make it short: There is the modern alternative pybind11.
Long version: I also had to embed Python. The C++ Python interface is small, so I decided to use the C API. That turned out to be a nightmare. Exposing classes makes you write tons of complicated boilerplate code. Boost.Python largely avoids this by using readable interface definitions. However, I found that Boost lacks sophisticated documentation, and for some things you still have to call the Python API. Furthermore, its build system seems to give people trouble; I can't tell, since I use the packages provided by the system. Finally, I tried the Boost.Python fork pybind11 and have to say that it is really convenient and fixes some shortcomings of Boost, such as the need to fall back to the Python API, the inability to use lambdas, the lack of easily comprehensible documentation, and the absence of automatic exception translation. Furthermore, it is header-only and does not pull in the huge Boost dependency at deployment, so I can definitely recommend it.

Hiding Complexity by Building Concise Libraries

I'm developing a product with a bunch of interlocking pieces (server, client, libraries, etc) and one of the pieces is a tiny library that users will link into their own client-side code (something kind of like the Flickr API or the Google Maps API). Once they've included that library, all of the interlocking bits magically hook themselves together. So API simplicity is a major, important goal.
The API that I expose to users has a grand total of two classes and seven public methods. Easy peasy, lemon-squeezy.
But the simplicity is a carefully crafted illusion. The library I'm distributing actually depends on another library, with 136 classes of its own (and more than a thousand public methods). During the build process, I link the two libraries together into a single deliverable, for ease of integration and deployment by the API consumer.
The problem I'm facing now is that I don't want the end user (an application developer integrating my software to enhance their own functionality) to ever be bothered with all that extra cruft, drowning in a torrent of unnecessary complexity.
From the outside, the library should look like it contains exactly two public classes, with exactly seven public methods.
How do you handle this sort of thing in your own projects? I'm interested in the language agnostic solutions, as well as the various techniques for different languages, compilers, and build tools.
In my specific case, I'm developing for the Flash platform (AIR/Flex/ActionScript) with SWC library files. The build methodology is analogous to the Java platform's, where all classes are bundled into a zipped code module with equal visibility (an ActionScript SWC file is, conceptually, almost exactly identical to a Java JAR file).
Doesn't .NET have an "internal" modifier for classes and methods? That's exactly the sort of thing I'm looking for, and if anyone knows of a tricky technique to hide the visibility of classes between SWC boundaries, I'd love to hear it.
It's pretty hard to hide things in AS. There is an internal access specifier and there are also namespaces. Adobe has some help on Packages and namespaces that may be useful to you.
It is important to note that namespaces do not limit access - they are really used to place symbols into a different ... well, namespace. This could be used to have two versions of the same library accessed in the same swf. My guess is that it just does some name-mangling behind the scenes before inserting definitions into the symbol table. If users want, they can just import the namespace and access anything that is "hidden" behind it. I've done that when hacking apart Adobe components. That said, if the user doesn't have the original source and is incapable of determining the namespace identifier, then you have a bit of security through obscurity.
Package access specifiers (e.g. private and internal) are closer to what you want. But if you can access classes outside package boundaries, then the user can too. There are even hacks I've seen around that can examine a SWC and spit out a list of the embedded classes, which one can then instantiate using getDefinitionByName.
So: you can hide the classes' existence in your documentation, use internal and private access specifiers wherever possible, and then use namespaces to mangle the class names. But you cannot prevent a determined person from finding and using these classes.
I think you can pull this off by using namespaces:
http://livedocs.adobe.com/flash/9.0/main/wwhelp/wwhimpl/common/html/wwhelp.htm?context=LiveDocs_Parts&file=00000040.html
Notice that namespaces in ActionScript are not the same as in C#; they are more like namespaces in XML.
Incidentally, one of the other tricks that I've used (since I didn't know about the "internal" modifier or namespaces) is to hide classes by declaring them outside the current package, like this:
package com.example {
    public class A {
        // ...
    }
}

// Declared outside the package block, B and C are visible only within
// this source file and are hidden from code that imports the package.
class B {
    // ...
}

class C {
    // ...
}
I've even thought about writing a little tool that will analyze all the "import" directives within a project and move all external dependencies into these kinds of hidden private classes.

Why would you want Dependency Injection without configuration?

After reading the nice answers in this question, I watched the screencasts by Justin Etheredge. It all seems very nice, with a minimum of setup you get DI right from your code.
Now the question that creeps up on me is: why would you want to use a DI framework that doesn't use configuration files? Isn't the whole point of using a DI infrastructure that you can alter the behaviour (the "strategy", so to speak) after building/releasing/whatever the code?
Can anyone give me a good use case that validates using a non-configured DI like Ninject?
I don't think you want a DI framework without configuration. I think you want a DI framework with the configuration you need.
I'll take Spring as an example. Back in the "old days" we used to put everything in XML files to make everything configurable.
When switching to a fully annotated regime, you basically define which component roles your application contains. So a given service may, for instance, have one implementation for the "regular runtime", while another implementation belongs in the "stubbed" version of the application. Furthermore, when wiring for integration tests you may be using a third implementation.
When looking at the problem this way, you quickly realize that most applications only contain a very limited set of component roles at runtime - these are the things that actually cause different versions of a component to be used. And usually a given implementation of a component is always bound to this role; it is really the reason of existence of that implementation.
So if you let the "configuration" simply specify which component roles you require, you can get away without much more configuration at all.
Of course, there are always going to be exceptions, but then you just handle the exceptions instead.
I'm on a path with krosenvold here, only with less text: within most applications, you have exactly one implementation per required "service". We simply don't write applications where each object needs 10 or more implementations of each service. So it would make sense to have a simple way to say "this is the default implementation; 99% of all objects using this service will be happy with it".
In tests, you usually use a specific mockup, so no need for any config there either (since you do the wiring manually).
This is what convention-over-configuration is all about. Most of the time, the configuration is simply a dumb repetition of something that the DI framework should know already :)
In my apps, I use the class object as the key to look up implementations and the "key" happens to be the default implementation. If my DI framework can't find an override in the config, it will just try to instantiate the key. With over 1000 "services", I need four overrides. That would be a lot of useless XML to write.
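That lookup convention can be sketched in a few lines of TypeScript (a hypothetical container, not any particular framework): the class object is the key, and when no override is registered, the key itself is instantiated.

// Class-as-key resolution with the key as the default implementation.
type Ctor<T> = new () => T;

class Container {
  private overrides = new Map<Ctor<any>, Ctor<any>>();

  register<T>(key: Ctor<T>, impl: Ctor<T>): void {
    this.overrides.set(key, impl);
  }

  resolve<T>(key: Ctor<T>): T {
    // Fall back to the key itself when no override exists, so only
    // the exceptional cases need any configuration at all.
    const impl = (this.overrides.get(key) ?? key) as Ctor<T>;
    return new impl();
  }
}

// Usage: with no override registered, the key class itself is created.
class Logger { log(msg: string): void { console.log(msg); } }
new Container().resolve(Logger).log('default implementation');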
With dependency injection, unit tests become very simple to set up, because you can inject mocks instead of real objects into your object under test. You don't need configuration for that; just create and inject the mocks in the unit test code.
I received this comment on my blog, from Nate Kohari:
Glad you're considering using Ninject!
Ninject takes the stance that the configuration of your DI framework is actually part of your application, and shouldn't be publicly configurable. If you want certain bindings to be configurable, you can easily make your Ninject modules read your app.config. Having your bindings in code saves you from the verbosity of XML, and gives you type-safety, refactorability, and intellisense.
You don't even need to use a DI framework to apply the dependency injection pattern. You can simply make use of static factory methods for creating your objects, if you don't need configurability apart from recompiling code (see the sketch below).
So it all depends on how configurable you want your application to be. If you want it to be configurable/pluggable without code recompilation, you'll want something you can configure via text or XML files.
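A minimal TypeScript sketch of that factory approach, with hypothetical names; swapping implementations means editing the factory and recompiling:

// The consumer depends only on the interface...
interface MessageSender {
  send(message: string): void;
}

class SmtpSender implements MessageSender {
  send(message: string): void {
    console.log('sending via SMTP: ' + message);
  }
}

class Notifier {
  constructor(private sender: MessageSender) {}
  notify(message: string): void {
    this.sender.send(message);
  }
}

// ...and the static factory is the single place where wiring happens.
class Factory {
  static createNotifier(): Notifier {
    return new Notifier(new SmtpSender());
  }
}

const notifier = Factory.createNotifier();
notifier.notify('build finished');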
I'll second the use of DI for testing. At the moment I only really consider using DI for testing, as our application doesn't require any configuration-based flexibility - it's also far too large to consider converting right now.
DI tends to lead to cleaner, more separated design - and that gives advantages all round.
If you want to change the behavior after a release build, then you will need a DI framework that supports external configurations, yes.
But I can think of other scenarios in which this configuration isn't necessary: for example, controlling the injection of the components in your business logic, or using a DI framework to make unit testing easier.
You should read about PRISM in .NET (a set of best practices for building composite applications in .NET). Following these practices, each module "exposes" its implementation types inside a shared container. This way each module has clear responsibility over "who provides the implementation for this interface". I think it will be clear enough once you understand how PRISM works.
When you use inversion of control you are helping to make your class do as little as possible. Let's say you have some Windows service that waits for files and then performs a series of processes on each file. One of the processes is to ZIP it and then email it.
public class ZipProcessor : IFileProcessor
{
    private readonly IZipService zipService;
    private readonly IEmailService emailService;

    // Dependencies arrive through the constructor; the class never
    // creates concrete service instances itself.
    public ZipProcessor(IZipService zipService, IEmailService emailService)
    {
        this.zipService = zipService;
        this.emailService = emailService;
    }

    public void Process(string fileName)
    {
        zipService.Zip(fileName, Path.ChangeExtension(fileName, ".zip"));
        emailService.SendEmailTo(................);
    }
}
Why would this class need to actually do the zipping and the emailing when you could have dedicated classes to do this for you? Obviously you wouldn't, but that's only a lead up to my point :-)
In addition to not implementing the zipping and emailing itself, why should the class know which class implements the services? If you pass interfaces to the constructor of this processor, then it never needs to create an instance of a specific class; it is given everything it needs to do the job.
Using a D.I.C. you can configure which classes implement certain interfaces and then just get it to create an instance for you, it will inject the dependencies into the class.
var processor = Container.Resolve<ZipProcessor>();
So now not only have you cleanly separated the class's functionality from the shared functionality, but you have also prevented the consumer/provider from having any explicit knowledge of each other. This makes the code easier to understand, because there are fewer factors to consider at the same time.
Finally, when unit testing you can pass in mocked dependencies. When you test your ZipProcessor your mocked services will merely assert that the class attempted to send an email rather than it really trying to send one.
//Mock the ZIP
var mockZipService = MockRepository.GenerateMock<IZipService>();
mockZipService.Expect(x => x.Zip("Hello.xml", "Hello.zip"));
//Mock the email send
var mockEmailService = MockRepository.GenerateMock<IEmailService>();
mockEmailService.Expect(x => x.SendEmailTo(.................));
//Test the processor
var testSubject = new ZipProcessor(mockZipService, mockEmailService);
testSubject.Process("Hello.xml");
//Assert it used the services in the correct way
mockZipService.VerifyAllExpectations();
mockEmailService.VerifyAllExpectations();
So, in short, you would want to do it to:
01: Prevent consumers from knowing explicitly which provider implements the services they need, which means there's less to understand at once when you read the code.
02: Make unit testing easier.
Pete