Is it possible to transport orders from a system on ECC 6.0 to an SAP R/3 system on 4.6C?

It may not sound logical, but I think there are many cases where a production system is on 4.6C and (for budget reasons) the corresponding test system was upgraded to ECC 6.0 before doing the same in production.
If it is not possible, what is the best solution in this scenario?

It may be technically possible to transport between 46C and ECC6, but it is advisable to employ a change freeze.
The main reason for this is that your test landscape now is significantly different from your productive system. It will be very hard to regression/integration test anything fully.
If you also changed from a Unicode to a non-Unicode system you should be particularly careful.
Custom programs may be reasonably safe to transport, but you have to measure the risk very carefully - particularly in business-critical processes.
Doing transports of SAP-Repairs or enhancements may have very unpredictable results (and may not even be allowed). It will really not be a good idea to transport any SAP objects, as the functionality is likely to have changed during the upgrade.
It may be necessary to create a second DEV/TEST landscape that is 46C in the interim, if you cannot afford a change freeze. However, this will require dual maintenance, and can result in its own set of issues.

Yes, it is possible and feasible. However, after importing code from the Unicode system into the non-Unicode system, some tweaking of the programs is required to meet the coding standards of 4.6C.
We have done the same and executed a complete project transporting from 7.4 to 4.6C.


Term for a smallish program built on top of a larger library/framework?

What exactly do you call a relatively smallish program/application built on top of a larger library/framework?
The small program (foo) in question does a whole lot but much of the heavy lifting is done by the library (bar) on top of which this program is built. When I say that I designed/developed 'foo' with such and such capabilities I do not want to convey the wrong idea that I coded everything, including the low level stuff, all by myself.
Edit: Just to clarify, this is a numerical code built on top of a numerical library.
TL;DR: almost nobody is going to think you invented everything. If they do, it's a good opportunity to educate them with some high-level information about computers and software architecture.
In general, this is just an application, utility, or tool. Your runtime context may throw in additional adjectives (e.g. command-line tool, web application, etc).
I think your worries regarding attribution are probably unfounded. If your project is open source, your documentation will certainly need to list the build and runtime dependencies. If there are different licenses, you'll likely also have to ship those with your tool. So it's unlikely that anyone other than people entirely unfamiliar with software engineering would get the "wrong idea".
Furthermore, nearly every software package is built on top of some sort of toolkit. For example, even basic utilities like ls, cp, etc. are built on top of the standard C library and make use of system calls provided by the operating system. Indeed, without the OS, such utilities have no runtime environment in which to execute. The OS has nothing to do if there is no hardware for it to manage (and even some of that hardware is likely to have firmware -- which is just software-on-chip -- to control some of its behavior regardless of an operating system).
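To make the layering concrete, here is a toy `ls` (an illustrative sketch, not how the real coreutils are written): the "program" is a few lines, while the directory traversal, syscalls, and buffering are all done by the standard library and the OS beneath it.

```python
import os
import sys

def tiny_ls(path: str = ".") -> list[str]:
    # The heavy lifting -- opening the directory, reading entries,
    # the underlying syscalls -- is all done by the layers below us.
    return sorted(entry.name for entry in os.scandir(path))

if __name__ == "__main__":
    for name in tiny_ls(sys.argv[1] if len(sys.argv) > 1 else "."):
        print(name)
```

Saying you "wrote" this tool is perfectly normal usage, even though nearly all of the machinery belongs to the stack underneath.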
The higher up the stack you move, the harder it becomes for someone to mistake the work you did versus the work you built upon. A web application needs an HTTP server, possibly a module interface or CGI environment, a language to express the intent of the software, etc. And then all of this is built on top of the OS, which goes down to hardware, some firmware, etc.
Finally, even if the library does the heavy lifting, that doesn't detract from the value of your software. If your software does a number of very useful things, it doesn't matter whether the library enabled your software to do those things. Some of the most important inventions in history are super simple in retrospect. It just took someone to see how to combine the parts in a different way. This is effectively what we do with software.
If someone does seem to get the wrong idea, this is perhaps a good time to educate them about the complexities of computing environments, the interrelationships between software components, the software stack, etc. It also might be fine to just let it slide and say, "Thank you!"

What tools do distributed programmers lack?

I have a dream to improve the world of distributed programming :)
In particular, I'm feeling a lack of necessary tools for debugging, monitoring, understanding and visualizing the behavior of distributed systems (heck, I had to write my own logger and visualizers to satisfy my requirements), and I'm writing a couple of such tools in my free time.
Community, what tools do you lack with this regard? Please describe one per answer, with a rough idea of what the tool would be supposed to do. Others can point out the existence of such tools, or someone might get inspired and write them.
OK, let me start.
A distributed logger with a high-precision global time axis - allowing you to register events from different machines in a distributed system with high precision, independent of clock offset and drift, and with sufficient scalability to handle the load of several hundred machines and several thousand logging processes. Such a logger lets you find transport-level latency bottlenecks in a distributed system by seeing, for example, how many milliseconds it actually takes for a message to travel from the publisher to the subscriber through a message queue, etc.
Syslog is not ok because it's not scalable enough - 50000 logging events per second will be too much for it, and timestamp precision will suffer greatly under such load.
Facebook's Scribe is not ok because it doesn't provide a global time axis.
Actually, both syslog and Scribe register events under arrival timestamps, not under occurrence timestamps.
Honestly, I don't lack such a tool - I've written one for myself, I'm greatly pleased with it and I'm going to open-source it. But others might.
P.S. I've open-sourced it: http://code.google.com/p/greg
Dear Santa, I would like visualizations of the interactions between components in the distributed system.
I would like a visual representation showing:
The interactions among components, either as a UML collaboration diagram or sequence diagram.
Component shutdown and startup times as self-interactions.
On which hosts components are currently running.
Location of those hosts, if available, within a building or geographically.
Host shutdown and startup times.
I would like to be able to:
Filter the components and/or interactions displayed to show only those of interest.
Record interactions.
Display a desired range of time in a static diagram.
Play back the interactions in an animation, with typical video controls for playing, pausing, rewinding, fast-forwarding.
I've been a good developer all year, and would really like this.
Then again, see this question - How to visualize the behavior of many concurrent multi-stage processes?.
(I'm shamelessly referring to my own stuff, but that's because the problems solved by this stuff were important for me, and the current question is precisely about problems that are important to someone.)
You could have a look at some of the tools that come with Erlang/OTP. It doesn't have all the features other people suggested, but some of them are quite handy and built with a lot of experience. Some of these are, for instance:
Debugger that can debug concurrent processes, also remotely, AFAIR
Introspection tools for mnesia/ets tables as well as process heaps
Message tracing
Load monitoring on local and remote nodes
Distributed logging and error report system
Profiler which works for distributed scenarios
Process/task/application manager for distributed systems
These come of course in addition to the base features the platform provides, like node discovery, IPC protocol, RPC protocols & services, transparent distribution, distributed built-in database storage, global and node-local registries for process names, and all the other underlying stuff that makes the platform tick.
I think this is a great question and here's my 0.02 on a tool I would find really useful.
One of the challenges I find with distributed programming is in the deployment of code to multiple machines. Quite often these machines may have slightly varying configuration or worse have different application settings.
The tool I have in mind would be one that could on demand reach out to all the machines on which the application is deployed and provide system information. If one specifies a settings file or a resource like a registry, it would provide the list for all the machines. It could also look at the user access privileges for the users running the application.
A refinement would be to provide indications when settings are not matching a master list provided by the developer. It could also indicate servers that have differing configurations and provide diff functionality.
This would be really useful for .NET applications since there are so many configurations (machine.config, application.config, IIS Settings, user permissions, etc) that the chances of varying configurations are high.
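The "diff against a master list" refinement can be sketched in a few lines. This is a simplified illustration with hypothetical setting names; in practice the actual settings would be collected from machine.config files, IIS metadata, the registry, etc.

```python
def diff_settings(master: dict, actual: dict) -> dict:
    """Compare one machine's settings against the developer's master list.

    Reports missing keys, unexpected keys, and keys whose values
    differ -- the core of a config-drift detector.
    """
    return {
        "missing": sorted(master.keys() - actual.keys()),
        "unexpected": sorted(actual.keys() - master.keys()),
        "mismatched": {
            k: {"expected": master[k], "found": actual[k]}
            for k in master.keys() & actual.keys()
            if master[k] != actual[k]
        },
    }

master = {"pool_size": 10, "timeout": 30, "log_level": "INFO"}
host = {"pool_size": 10, "timeout": 60, "extra_flag": True}
report = diff_settings(master, host)
```

Running the same comparison across every deployed host, then diffing hosts against each other, would surface exactly the "slightly varying configuration" problem described above.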
In my opinion, what is missing is a distributed programming platform...a platform that makes application programming over distributed systems as transparent as non-distributed programming is now.
Isn't it a bit early to work on tools when we don't even agree on a platform? We have several flavors of actor models, virtual shared memory, UMA, NUMA, synchronous dataflow, tagged-token dataflow, multi-hierarchical memory vector processors, clusters, message-passing mesh or network-on-a-chip, PGAS, DGAS, etc.
Feel free to add more.
To contribute:
I find myself writing a lot of distributed programs by constructing a DAG, which gets transformed into platform-specific code. Every platform optimization is a different kind of transformation rules on this DAG. You can see the same happening in Microsoft's Accelerator and Dryad, Intel's Concurrent Collections, MIT's StreaMIT, etc.
A language-agnostic library that collects all these DAG transformations would save re-inventing the wheel every time.
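As an illustration of the idea (hypothetical names, not the API of any of the products mentioned), the core of such a DAG-rewriting library can be tiny: nodes, rules, and a bottom-up rewriter.

```python
class Node:
    """A DAG node: an operation plus its inputs (Nodes or leaf values)."""
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, list(inputs)

def fuse_map_map(node):
    # Rule: map(f, map(g, x)) -> map(f∘g, x), a classic fusion pass.
    if (node.op == "map" and isinstance(node.inputs[1], Node)
            and node.inputs[1].op == "map"):
        f, inner = node.inputs
        g, x = inner.inputs
        return Node("map", ("compose", f, g), x)
    return node

def rewrite(node, rules):
    # Bottom-up rewriting: transform children first, then this node.
    if not isinstance(node, Node):
        return node
    node.inputs = [rewrite(i, rules) for i in node.inputs]
    for rule in rules:
        node = rule(node)
    return node
```

Each platform-specific optimization becomes one more rule function, and the shared rewriter is reused across all of them.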
You can also take a look at Akka:
http://akka.io
Let me notify those who've favourited this question by pointing to the Greg logger - http://code.google.com/p/greg . It is the distributed logger with a high-precision global time axis that I've talked about in the other answer in this thread.
Apart from the mentioned tool for "visualizing the behavior of many concurrent multi-stage processes" (splot), I've also written "tplot" which is appropriate for displaying quantitative patterns in logs.
A large presentation about both tools, with lots of pretty pictures, is here.

Advantages/Disadvantages of Refactoring Tools

What are the advantages and disadvantages of refactoring tools, in general?
Advantages
You are more likely to do the refactoring if a tool helps you.
A tool is more likely to get "rename"-type refactorings right the first time than you are.
A tool lets you do refactorings on a codebase without unit tests that you could not risk doing by hand.
A tool can save you lots of time.
Both the leading tools (RefactorPro/CodeRush and Resharper) will also highlight most coding errors without you having to do a compile.
Both the leading tools will highlight where you don't keep to their concept of best practices.
Disadvantages
Sometimes the tool will change the meaning of your code without you expecting it, due to bugs in the tool or the use of reflection etc. in your code base.
A tool may make you feel safe with fewer unit tests…
A tool can be very slow…, so for renaming local vars etc. it can be quicker to do it by hand.
A tool can slow down the development system a lot, as it has to keep its database updated while you are editing code.
A tool takes time to learn.
A tool pushes you towards the refactorings it includes, and you may ignore the ones it doesn't, to your disadvantage.
A tool will have a large memory footprint for a large code base; however, memory is cheap these days.
No tool will cope well with very large solution files.
You will have to get your boss to agree to paying for the tool, and this may take longer than the time the tool saves.
You may have to get your IT department to agree to you installing the tool
You will be lost in your next job if they will not let you use the same tool :-)
Advantage: the obvious one: speed.
Disadvantages:
they push you towards the refactorings they include and you may ignore the ones they don't, to your disadvantage;
I've only tried one, with VS, and it slowed down the app noticeably. I couldn't decide if it was worth it but had to rebuild the machine and haven't re-installed it so I guess that tells you.
Code improvement suggestions (can be both an advantage and a disadvantage)
Removes code noise (advantage)
Renaming variables, methods (advantage)
I'd say that the speed of making code changes or writing code is the biggest advantage. I have CodeRush and I am lost without it.
I'd say the biggest disadvantage is the memory footprint; if you are tight on memory then it's probably going to hurt more than help. But I've got 4GB and 8GB on my dev boxes so I don't really notice. (Not that they take huge amounts of memory, but if you have 2GB or less then it is going to be noticeable.)
Also, I've noticed that the two big refactoring tools for .NET (RefactorPro/CodeRush and Resharper) both have problems with web site projects (A legacy inheritance so out of my control) with their code analysis/suggestion engine. Seems to think everything is bad (actually, that's probably a fairly accurate assessment for a web site project, but I don't want to be reminded of it constantly)

Benefits of cross-platform development?

Are there benefits to developing an application on two or more different platforms? Does using a different compiler on even the same platform have benefits?
Yes, especially if you plan to distribute your code for multiple platforms.
But even if you don't, cross-platform development is a form of future-proofing: if it runs on multiple (diverse) platforms today, it's more likely to run on future platforms than something that was tuned, tweaked, and specialized to work on a clean install of version 7.8.3 of vendor X's Q-series boxes (patch level 1452) and nothing else.
There seems to be a benefit in finding and simply preventing bugs with a different compiler and a different OS. Different CPUs can pin down endian issues early. There is the pain at the GUI level if you want to stay native at that level.
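The endian issues mentioned above can be made concrete with Python's `struct` module, which serializes the same 32-bit integer under explicit byte orders (a small illustration of the class of bug that only surfaces when you test on a different CPU):

```python
import struct

value = 0x12345678

big = struct.pack(">I", value)     # network / big-endian byte order
little = struct.pack("<I", value)  # x86-style little-endian byte order

# The same value, two different wire representations:
assert big == b"\x12\x34\x56\x78"
assert little == b"\x78\x56\x34\x12"

# Reading big-endian bytes as little-endian silently scrambles the value:
assert struct.unpack("<I", big)[0] == 0x78563412
```

Code that pins down an explicit byte order at every serialization boundary passes on both kinds of CPU; code that relies on the host's native order works until the day it's built for the other one.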
Short answer: Yes.
Short of cloning a disk, it is almost impossible to make two systems exactly alike, so you are going to end up running on "different platforms" whether you meant to or not. By specifically confronting and solving the "what if system A doesn't do things like B?" problem head on you are much more likely to find those key assumptions your code makes.
That said, I would say you should get a good chunk of your base code working on system A, and then take a day (or a week or ...) and get it running on system B. It can be very educational.
My education came back in the 80's when I ported a source level C debugger to over 100 flavors of U*NX. Gack!
Are there benefits to developing an application on two or more different platforms?
If this is production software, the obvious reason is the lure of a larger client base. Your product's appeal is magnified the moment the client hears that you support multiple platforms. Remember, most enterprises do not use a single OS or even a single version of the OS. It is fairly typical to find one section using Windows, another Macs, and a smaller one some flavor of Linux.
It is also often far more tedious to customize a product for a single platform than to have it run on multiple platforms. The law of diminishing returns kicks in before you know it.
Of course, all of this makes little sense, if you are doing customization work for an existing product for the client's proprietary hardware. But even then, keep an eye out for the entire range of hardware your client has in his repertoire -- you never know when he might ask for it.
Does using a different compiler on even the same platform have benefits?
Yes, again. Different compilers implement different extensions. See to it that you are not dependent on a particular version of a particular compiler.
Further, there may be a bug or two in the compiler itself. Using multiple compilers helps sort these out.
I have also seen parts of a (cross-platform) product use two different compilers - one was used in those modules where floating-point manipulation required a very high level of accuracy. (It's been a while since I've heard of anyone else doing that, but ...)
I've ported a large C++ program, originally Win32, to Linux. It wasn't very difficult. Mostly dealing with compiler incompatibilities, because the MS C++ compiler at the time was non-compliant in various ways. I expect that problem has mostly gone now (until C++0x features start gradually appearing). Also writing a simple platform abstraction library to centralize the platform-specific code in one place. It depends to what extent you are dependent on services from the OS that would be hard to mimic on a new platform.
You don't have to build portability in from the ground up. That's why "porting" is often described as an activity you can perform in one shot after an initial release on your most important platform. You don't have to do it continuously from the very start. Purely for economic reasons, if you can avoid doing work that may never pay off, obviously you should. The cost of porting later on, when really necessary, turns out to be not that bad.
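The "simple platform abstraction library" mentioned above can be sketched as follows. This is a minimal illustration, not the library from the port described; the point is that all platform-specific branching is centralized in one function rather than scattered through the code base.

```python
import os
import sys

def config_dir(app_name: str) -> str:
    """Return a conventional per-user config directory for app_name.

    Callers never inspect sys.platform themselves; porting to a new
    platform means touching this one function.
    """
    home = os.path.expanduser("~")
    if sys.platform.startswith("win"):
        return os.path.join(os.environ.get("APPDATA", home), app_name)
    if sys.platform == "darwin":
        return os.path.join(home, "Library", "Application Support", app_name)
    return os.path.join(home, ".config", app_name)  # Linux / XDG default
```

The same pattern applies to paths, process launching, IPC, and anything else the OS does differently: one shim per concern, with the rest of the application written against the shim.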
Mostly, there is an existing platform that the application is written for (custom software). But you address more developers (on both platforms) if you decide to use a platform-independent language.
Also, products (standard software) for SMEs sell better if they run on different platforms! You can gain access to both markets, Windows and Linux! (and Mac OS X and so on...)
Big companies mostly buy hardware which is supported/certified by the product vendor only to deploy the specified product.
If you develop on multiple platforms at the same time, you get the advantage of being able to use different tools. For example, I once had a memory overwrite (I still swear I didn't need the +1 for the null byte!) that caused "free" to crash. I brought the code up on Windows and found the overwrite in about 1 minute with Rational Purify... I had spent a week chasing it under Linux (valgrind might have found it... but I didn't know about it at the time).
Different compilers on the same or different platforms is, to me, a must as each compiler will report different things, and sometimes the report from one compiler about an error will be gibberish but the other compiler makes it very clear.
Using multiple databases while developing means you are much less likely to tie yourself to a particular database, which means you can swap the database out if there is a reason to do so. If you want to integrate something that uses Oracle into an existing infrastructure that uses SQL Server, for example, it can really suck - much better if the Oracle or SQL Server pieces can be moved to the other system (I know of some places that have 3 different databases for their financial systems... ick).
In general, always developing for two or three things means that the odds of you finding mistakes is better, and the odds of the system being more flexible is better.
On the other hand all of that can take time and effort that, at the immediate time, is seen as an unneeded expense.
Some platforms have really dreadful development tools. I once worked in an IB where, rather than use Sun's ghastly toolset, people developed code in VC++ and then ported it to Solaris.

Have you ever used code virtualizer or vmprotect to protect from reverse engineering?

I know that there is no way to fully protect our code.
I also know that if a user wants to crack our app, then he or she is not a user that would buy our app.
I also know that it is better to improve our app.. instead of being afraid of anticracking techniques.
I also know that there is no commercial tool that can protect our app....
I also know that....
Ok. Enough. I've heard everything.
I really think that adding a little protection won't hurt.
So.... have you ever used Code Virtualizer from Oreans or VMProtect?
I've heard that they are sometimes detected as viruses by some antivirus software.
Are there any experiences I should be aware of before buying one?
I know it creates some virtual machines and obfuscates the code a little to make it harder to find the weaknesses of our registration routines.
Is there any warning I should know?
Thanks.
Any advice would be appreciated.
Jag
In my humble opinion, you should feel lucky, or even eager, to be pirated, because that means your product is successful and popular.
That's plain incorrect. My software that I worked many months on was cracked the moment it was released. There are organised cracking groups that feed off download.com's RSS channel etc and crack each app that appears. It's a piece of cake to extract the keygen code of any app, so my response was to:
a) resort to digital certificate key files, which are impossible to forge as they are signed with a private key and validated by a public one embedded in the app (see: aquaticmac.com - I use the STL C++ implementation, which is cross-platform), along with
b) The excellent Code Virtualizer™. I will say that the moment I started using Code Virtualizer™ I got some complaints from one or two users about app crashes. When I removed it from their build the crashes ceased. Still, I'm not sure whether it was a problem with CV per se, as it could have been an obscure bug in my code, but I have since reshuffled my code and heard no complaints.
After the above, no more cracks. Some people look at being cracked as a positive thing, as it's a free publicity channel, but those people usually haven't spent months/years on an idea only to find they're being ripped off. Quite hard to take.
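The key-file scheme in (a) can be sketched as follows. This is a toy illustration with deliberately tiny RSA parameters, not the aquaticmac implementation; real schemes use full-size keys and proper signature padding. The point is that the binary only ever contains the public half, so no keygen can be extracted from it.

```python
import hashlib

# Toy RSA parameters -- far too small for real use, illustration only.
p, q = 61, 53
n = p * q          # modulus (3233); ships inside the app
e = 17             # public exponent; ships inside the app
d = 2753           # private exponent; stays with the vendor

def license_digest(fields: str) -> int:
    # Hash the license fields and reduce into the (toy) modulus range.
    return int(hashlib.sha256(fields.encode()).hexdigest(), 16) % n

def sign_license(fields: str) -> int:
    # Vendor side: signing requires the private exponent d.
    return pow(license_digest(fields), d, n)

def verify_license(fields: str, signature: int) -> bool:
    # App side: only n and e are embedded in the binary.
    return pow(signature, e, n) == license_digest(fields)

lic = "name=Alice;expires=2026-01-01"
sig = sign_license(lic)
```

A cracker can still patch the check out of the binary, which is where the wrapping protection in (b) comes in; the signature only prevents forged key files.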
Unfortunately, VM-protected software is more likely to be affected by false positives than conventionally packed software. The reason is that since VM protection is so complicated, AV software is often unable to analyze the protected code, and may rely on pattern libraries or issue generic warnings for any file protected by a system it can't analyze. If your priority is to eliminate false positives, I suggest picking a widely-used protection solution, e.g. ASProtect (although Oreans' products are becoming quite popular as well).
Software VM protection is quite popular today, especially as it's now available at an accessible price for small companies and independent software developers. It also takes a considerable amount of effort to crack in comparison to non-VM techniques - the wrappers usually have the standard anti-debugging tricks that other protections have, as well as the VM protection. Since the virtual machine is generated randomly on each build, the crackers will need to analyze the VM instruction set and reverse engineer the protected code back to machine code.
The main disadvantage of VM protection is that if it's overused (used to protect excessive parts of the code), it can slow down your application considerably - so you'll need to protect just the critical parts (registration checks, etc). It also doesn't apply to certain application types - it likely won't work on DLLs that are used for injection, as well as device drivers.
I've also heard that StrongBit EXECryptor is a decent protection package at a decent price. (I'm not affiliated with said company nor guarantee any quality what-so-ever, it's just word of mouth and worth checking out IMO).