Comparison between the officer and ReporteRs packages for R

I have some questions regarding the officer and ReporteRs packages for R. I am working to automate Word document generation and have started using officer. My searches indicate that there is more written about ReporteRs on the web and that that package may have been under development for a longer period.
This leads me to ask whether there are any major formatting limitations in officer compared to ReporteRs when producing Word documents? For example, I cannot see a function for adding page numbers in officer.
A second question is to what extent the packages are cross-compatible, e.g. can one use functions from ReporteRs in officer, or find equivalent functions?
Any help would be greatly appreciated.


What is the difference between a code review and a code audit?

According to Wikipedia:
A software code audit is a comprehensive analysis of source code in a programming project with the intent of discovering bugs, security breaches or violations of programming conventions.
Source: Code audit
Code review is systematic examination (sometimes referred to as peer review) of computer source code. It is intended to find mistakes overlooked in the initial development phase, improving the overall quality of software.
Source: Code review
I can't see the difference here. Could somebody please pinpoint the difference between a code review and a code audit?
Generally, a code audit involves the whole software product, while a code review focuses on only a part of it and may be built into the workflow for adding code to your software (via pull/merge requests).
They differ in when they are performed and how big their scope is.
When
Code reviews are performed in two typical cases. First, when a pull request arrives, the maintainer must review the code in that PR. Second, to improve the team's coding ability, mutual code reviews may be held once or twice per week.
A code audit is also performed in two typical cases. Suppose I am your customer, or the QA/QC department in your company. In the first case, I want assurance that your source code complies with standards such as ISO or PEP. In the second case, I suspect that something seriously wrong is happening in the code. In either case, I ask you to perform a code audit.
How big
Code reviews are performed on bite-sized changes, typically a handful of files.
A code audit is performed on the whole project: architecture, infrastructure, scaling, documentation, clean code, and so on.
Oftentimes Code Reviews are performed before code is checked in or merged with the main branch, and Code Audits are typically performed afterwards.

Junit - program verification vs whitebox fuzzing?

I understand that program verification is a branch of computer engineering, but that its practical application to real-world code bases is limited by combinatorial explosion.
I also understand that, as part of designing a software change for a modification to an existing Java framework, it's helpful to think about whitebox, boundary and blackbox tests for your algorithm in advance. (Some people call this hammock-driven development: thinking before you code.)
Assuming you take this thinking and embed it in JUnit-style tests, I'm assuming that the computer science name for the result is strictly 'whitebox testing/fuzzing' and not sufficient to constitute 'program verification'.
So my question is: are JUnit tests whitebox fuzzing or program verification?
Program verification is done by proving mathematical properties of a mathematical model related to your application (the model can be derived from the formal semantics of the programming language or built by hand, for example by writing behavioral types that model your web service).
Take a look at pi-calculus to understand what I mean.
Of course, JUnit has nothing to do with formal program verification.
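To illustrate the distinction, here is a minimal, hypothetical JUnit-style boundary test (the clamp function and the test are invented for this sketch). Each assertion exercises one hand-picked input, which is whitebox/boundary testing; verification would instead prove the clamping property for every possible input.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class ClampTest {

    // Hypothetical function under test: clamp x into the range [lo, hi].
    static int clamp(int x, int lo, int hi) {
        return Math.max(lo, Math.min(x, hi));
    }

    @Test
    void boundaryValues() {
        // Whitebox/boundary cases chosen by inspecting the implementation.
        assertEquals(0, clamp(-5, 0, 10));   // below the lower bound
        assertEquals(0, clamp(0, 0, 10));    // exactly the lower bound
        assertEquals(10, clamp(10, 0, 10));  // exactly the upper bound
        assertEquals(10, clamp(99, 0, 10));  // above the upper bound
    }
}
```

However many such cases you add, the suite only samples the input space; it never amounts to a proof that lo <= clamp(x, lo, hi) <= hi holds for all x.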

What is Code Coverage?

I have three questions:
What is code coverage?
What is it good for?
What tools are used for analyzing code coverage?
You can get very good information from these Stack Overflow questions:
Free code coverage tools
What is Code Coverage and how do YOU measure it?
Code coverage is a measurement of how many lines/blocks/arcs of your code are executed while the automated tests are running. Coverage is collected by using a specialized tool to instrument the binaries, adding tracing calls, and then running a full set of automated tests against the instrumented product. A good coverage tool will give you not only the percentage of the code that is executed, but will also allow you to drill into the data and see exactly which lines of code were executed during a particular test.
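As a toy sketch of that instrumentation idea (invented for illustration, not how any particular tool actually works), the tool conceptually inserts probe calls into the code and tallies which ones fire while the tests run:

```java
import java.util.BitSet;

// Toy recorder: each probe id stands for one line/block of the original code.
class CoverageRecorder {
    private final BitSet hits = new BitSet();

    void probe(int id) { hits.set(id); }   // "this line was executed"

    double percentCovered(int totalProbes) {
        return 100.0 * hits.cardinality() / totalProbes;
    }
}

class Instrumented {
    static final CoverageRecorder recorder = new CoverageRecorder();

    // Instrumented version of a trivial abs() method: a probe precedes each block.
    static int abs(int x) {
        recorder.probe(1);
        if (x < 0) {
            recorder.probe(2);
            return -x;
        }
        recorder.probe(3);
        return x;
    }
}
```

Running only abs(5) marks probes 1 and 3, so a report built from the recorder would show 2 of 3 blocks covered (about 67%) and point at the x < 0 branch as never exercised.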
Code coverage algorithms were first created to address the problem of assessing source code by looking directly at the source code. Code coverage belongs to the structural testing category because its assertions are made on the internal parts of the program and not on the system outputs. Therefore code coverage aims at finding the parts of the code that are not exercised by the tests.
http://www.stickyminds.com/sitewide.asp?Function=edetail&ObjectType=ART&ObjectId=7580
It's good for:
Functional coverage, aimed at finding how many functions or procedures were executed.
Statement or line coverage, which identifies how many lines in the source code have been executed (illustrated by the sketch after this list).
Condition or decision coverage, which counts how many of the branch conditions in the program were evaluated.
Path coverage, which focuses on whether every possible path from a given starting point in the code has been executed.
Entry and exit coverage, which finds how many functions (C/C++, Java) or procedures (Pascal) were executed from beginning to end.
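To make the difference between statement coverage and branch coverage concrete, here is a minimal, hypothetical Java example (the method and the test are invented for this sketch):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class ClassifyTest {

    // Hypothetical method under test.
    static String classify(int age) {
        String label = "adult";
        if (age < 18) {
            label = "minor";   // executed only when the condition is true
        }
        return label;
    }

    @Test
    void minor() {
        // This single test executes every statement above (100% line coverage),
        // but it only takes the "true" branch of the if, so branch coverage is 50%.
        assertEquals("minor", classify(10));
    }
}
```

Adding a second test with an age of 18 or more would exercise the untaken branch and bring branch coverage to 100% as well.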
TOOLS
http://www.codecoveragetools.com/
http://java-source.net/open-source/code-coverage
http://www.codecoveragetools.com/index.php/coverage-process/code-coverage-tools-java.html
http://open-tube.com/10-code-coverage-tools-c-c/
http://csharp-source.net/open-source/code-coverage
http://www.kdedevelopers.org/node/3190
From the Wikipedia article:
Code coverage is a measure used in software testing. It describes the degree to which the source code of a program has been tested. It is a form of testing that inspects the code directly and is therefore a form of white box testing. Currently, the use of code coverage is extended to the field of digital hardware, the contemporary design methodology of which relies on hardware description languages (HDLs).
Advocating the use of code coverage
A code coverage tool simply keeps track of which parts of your code get executed and which parts do not. Usually, the results are granular down to the level of each line of code. So in a typical situation, you launch your application with a code coverage tool configured to monitor it. When you exit the application, the tool will produce a code coverage report which shows which lines of code were executed and which ones were not. If you count the total number of lines which were executed and divide by the total number of lines which could have been executed, you get a percentage. If you believe in code coverage, the higher the percentage, the better. In practice, reaching 100% is extremely rare.
The use of a code coverage tool is usually combined with the use of some kind of automated test suite. Without automated testing, a code coverage tool merely tells you which features a human user remembered to use. Such a tool is far more useful when it is measuring how complete your test suite is with respect to the code you have written.
Related articles
The Future of Code-Coverage Tools
The effectiveness of code coverage tools in software testing
Tools
Open Source Code Coverage Tools in Java
Code coverage is a metric, showing how "well" the source code is tested. There are several types of code coverage: line coverage, function coverage, branch coverage.
In order to measure coverage, you run the application either manually or through automated tests.
Tools can be divided in two categories:
- the ones that run the compiled code in a modified environment (like the debugger), counting the required points (functions, lines, etc.);
- the ones that require special compilation - in this case the resulting binary already contains the code which actually does the counting.
There are several tools for measuring and visualizing the result; they depend on the platform and on the source code's language.
Please read the article on Wikipedia.
To suggest tools, please specify which OS and language you use.
Code coverage is a measure used in software testing. It describes the degree to which the source code of a program has been tested.
http://en.wikipedia.org/wiki/Code_coverage
The Wikipedia definition is pretty good, but in my own words, code coverage tells you how much of your code your automated tests account for. 100% would mean that every single line of code in your application is covered by a unit test.
NCover is an application for .NET
The term refers to how well your program is covered by your tests. See the following wikipedia article for more info:
http://en.wikipedia.org/wiki/Code_coverage
The other answers already cover what code coverage is. The thing I'd like to stress is that you need to be careful not to treat high coverage as implicitly meaning you've tested all scenarios. It doesn't necessarily say how well you've tested the code or the quality of your tests, just that you've hit a certain percentage of the code as part of the tests running.
High Code Coverage does not necessarily mean High Test Quality, but High Test Quality does mean High Code Coverage
In practice, I usually aim for 90-95% code coverage which is often achievable. The last few % are often too expensive to be worth trying to hit.
There are many ways to develop applications. One of those is "Extreme Programming" or "Test-Driven Development (TDD)". It states that all code should be tested. Code coverage is a means of measuring how much is tested.
I'd like to make a small remark about this: I don't think all code should be tested, nor that one should set a specific percentage of code coverage. Neither do I think that code shouldn't be tested with unit tests (code testing code). I do think one should decide what makes sense to test. For this reason I generally don't use code coverage.
One thing that some tools provide is highlighting the parts that are tested. This way you might run into some code that isn't tested but actually should be, which is the only thing I use it for.
Good answers.
My two cents is that there is no method of testing that catches all errors, but less testing will never catch more errors, so any testing is good. To my mind, coverage testing is not to show what code has been exercised, but to show what code has not been exercised, because that is where bugs love to lurk.
If you combine it with single-stepping, it is a very good way to review code and catch bugs. Here's an example.
Another useful tool for ensuring code quality (which encompasses code coverage) that I recently used is Sonar.
Here is the link http://www.sonarqube.org/

Semantic Diff Utilities [closed]

I'm trying to find some good examples of semantic diff/merge utilities. The traditional paradigm of comparing source code files works by comparing lines and characters, but are there any utilities out there (for any language) that actually consider the structure of code when comparing files?
For example, existing diff programs will report "difference found at character 2 of line 125. File x contains v-o-i-d, where file y contains b-o-o-l". A specialized tool should be able to report "Return type of method doSomething() changed from void to bool".
I would argue that this type of semantic information is actually what the user is looking for when comparing code, and should be the goal of next-generation programming tools. Are there any examples of this in available tools?
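As a rough sketch of that idea (assuming the open-source JavaParser library; the class and the two source snippets are invented for illustration), one can parse both versions into syntax trees and compare method signatures rather than characters:

```java
import com.github.javaparser.StaticJavaParser;
import com.github.javaparser.ast.CompilationUnit;
import com.github.javaparser.ast.body.MethodDeclaration;
import java.util.HashMap;
import java.util.Map;

public class ReturnTypeDiff {

    // Map each method name in the source to its declared return type.
    static Map<String, String> returnTypes(String source) {
        CompilationUnit cu = StaticJavaParser.parse(source);
        Map<String, String> types = new HashMap<>();
        for (MethodDeclaration m : cu.findAll(MethodDeclaration.class)) {
            types.put(m.getNameAsString(), m.getTypeAsString());
        }
        return types;
    }

    public static void main(String[] args) {
        String oldSrc = "class A { void doSomething() {} }";
        String newSrc = "class A { boolean doSomething() { return true; } }";

        Map<String, String> before = returnTypes(oldSrc);
        Map<String, String> after = returnTypes(newSrc);

        for (String name : before.keySet()) {
            if (after.containsKey(name) && !before.get(name).equals(after.get(name))) {
                // Prints: Return type of doSomething() changed from void to boolean
                System.out.printf("Return type of %s() changed from %s to %s%n",
                        name, before.get(name), after.get(name));
            }
        }
    }
}
```

The language-aware tools mentioned in the answers below apply this kind of tree-based comparison at a much larger scale, handling renames, moves and merges rather than just method signatures.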
We've developed a tool that is able to precisely deal with this scenario. Check http://www.semanticmerge.com
It merges (and diffs) based on code structure rather than text-based algorithms, which basically allows you to deal with cases involving heavy refactoring. It is also able to render both the differences and the merge conflicts.
And instead of getting confused by text blocks being moved, since it parses first, it is able to display the conflicts on a per-method basis (per element, in fact). A case like this won't even have manual conflicts to solve.
It is a language-aware merge tool and it has been great to be finally able to answer this SO question :-)
Eclipse has had this feature for a long time. It's called "Structure Compare", and it's very nice: in the Java compare view, methods are marked with minus and plus icons in the upper pane, and the same structural comparison is available for XML files.
To do "semantic comparisons" well, you need to compare the syntax trees of
the languages, and take into account the meaning of symbols. A really
good semantic diff would understand the language semantics, and realize
when one block of code was equivalent in function to another. Going
this far requires a theorem prover, and while it would be extremely
cute, isn't presently practical for a real tool.
A workable approximation of this is simply comparing syntax trees, and reporting
changes in terms of structures inserted, deleted, moved, or changed.
Getting somewhat closer to a "semantic comparison", one could report
when an identifier is changed consistently across a block of code.
See our http://www.semanticdesigns.com/Products/SmartDifferencer/index.html
for a syntax tree-based comparison engine that works with many languages, that does
the above approximation.
EDIT Jan 2010: Versions available for C++, C#, Java, PHP, and COBOL.
The website shows specific examples for most of these.
EDIT May 2010: Python and JavaScript added.
EDIT Oct 2010: EGL added.
EDIT Nov 2010: VB6, VBScript, VB.net added
What you're groping for is a "tree diff". It turns out that this is much harder to do well than a simple line-oriented textual diff, which is really just the comparison of two flat sequences.
"A Fine-Grained XML Structural Comparison Approach" concludes, in part with:
Our theoretical study as well as our experimental evaluation showed that the proposed method yields improved structural similarity results with respect to existing alternatives, while having the same time complexity (O(N^2))
(emphasis mine)
Indeed, if you're looking for more examples of tree differencing I suggest focusing on XML since that's been driving practical developments in that area.
Shameless plug for my own project:
HTML Tree Diff does structure-aware comparison of XML and HTML documents, and is written in Python.
http://pypi.python.org/pypi/html-tree-diff/0.1.0
The solution to this would be on a per-language basis. That is, unless it's designed with a plugin architecture that defers parsing the code into a tree and the semantic comparison to a language-specific plugin, it will be very difficult to support multiple languages. What language(s) are you interested in having such a tool for? Personally, I'd love one for C#.
For C# there is an assembly diff add-in to Reflector but it only does a diff on the IL not the C#.
You can download the diff add-in here [zip] or go to the project on the codeplex site here.
A company called Zynamics offers a binary-level semantic diff tool. It uses a meta-assembly language called REIL to perform graph-theoretic analysis of 2 versions of a binary, and produces a color-coded graph to illustrate differences between them. I am not sure of the price, but I doubt it is free.
http://prettydiff.com/
Pretty Diff minifies each input to remove comments and unnecessary white space and then beautifies the code prior to the diff algorithm. I cannot think of any way to become more code-semantic than this. And it's written in JavaScript, so it runs directly in the browser.

What were your achievements in programming in 2008? [closed]

What were your achievements in programming in 2008?
What technologies surprised you or did you learn this year, and what do you expect in programming terms in 2009?
Edit:
Changed to Wiki
I wrote 2 VB.NET language features that will ship as part of VS 2010.
I designed a programming language called Liberty.
However, I've only implemented a small fraction of it. I stopped working on it so that I could concentrate on building a profitable software company. My original intent was to market the language (actually an IDE for it) as my first product, but the economics of programming languages being what they are, I decided to pick something else for my company's first product. I've been thinking about turning it into an open source project. If the statement "A programming language that feels like LISP, but looks like C#..." has any appeal to you, and you are interested in working on an open source .NET compiler, let me know.
I started my own software company
I've designed and implemented most of my company's first product "Transactor Code Agent", which should be shipping in Q1 2009. I've been billing it as a "Disaster Recovery Tool for Programmers".
It's a tool that provides automatic local version history for source code. You point it at the folders that contain your source, and then anytime you make a change to a file it automatically creates a backup for you. It's meant to be a complement to existing source control setups, by protecting all the "broken", "in-progress" work that you usually don't check into source control.
By the way, we are looking for beta-testers. If you are interested let me know.
After Scott's outing I would feel deep shame in confessing what I achieved in 2008.
I made one of my "flagship" applications better by removing features from it.
For the first time, I sold my work to a general audience, via the App Store. In so doing, I:
Reached over five times as many users as my most widely-used previous work (26000+ instead of 5000+)
Made more than three times as much money as my most lucrative previous work (a Google Summer of Code grant in 2005)
Learned two new environments (Objective-C/Cocoa Touch and Ruby/Rails) after sticking with Perl for many years
Disciplined myself enough to get the boring bits done
Learned what it meant to be responsible to thousands of people
But perhaps most importantly, I made beautiful things that I could be proud of.
In 2009 (or maybe late '08) I'll release a new product that I hope will push all of that even farther, and maybe even be a best-in-class solution for a problem everybody faces.
Helped push another release towards the door (not quite there yet)
Presented a paper on accelerating the Hough Transform at WorldComp
Averaged just shy of one blog post per week
Built up hope of catching Jon Skeet in reputation
Did a huge set of bizarro work with reflection and dynamic code generation
Gave up all hope of catching Jon Skeet in reputation
Managed three employees, more or less successfully
I decided I would learn a new language, nothing specific at the time; since then I have learned Python.
This coming year, I would like to learn another language, preferably something like C++, or maybe, just maybe (I'm a *nix kinda guy), I'll try a Microsoft stack with something like .NET, but we'll see what happens.
Improved interviewing skills. I am now better able to discern good and bad applicants through better questions, including small whiteboard coding sessions.
I am up to speed with Drupal, though lots still to learn. First time really working with a good framework.
In 2009, maybe I'll get around to doing some Lisp funness.
I opened myself to the world of dynamic languages and functional languages. I can now read programs that don't resemble C++- or C#-style code full of {} and ;. In the process I developed a better understanding of patterns like MVC.
I needed to learn PF early this December as our existing firewall solution was woefully underpowered for an industrial application, but we didn't have the dough for the "professional" solutions (i.e. ci$co stuff).
So I ended up taking my existing OpenBSD box in the server stack and turning it into the firewall using PF. Since the system uses multiple servers and multiple IPs (some on domains), I needed a combination of NAT, RDR and the usual rules.
It's certainly not as sexy as learning APL or LISP (or Ruby etc.) for fun, but it was necessary and urgent.
The new firewall is performing beautifully and I don't have to reset the horrid little firewall appliances twice a week anymore (which had to be done remotely which also was not fun). :-)
Cheers,
-Richard
Well, I've built a big site (for some project) and learned Java; now I want to learn C in the coming year.
I released my first program into the wild world of the Internet.
I went outside my .NET bubble by creating the previously mentioned program in Objective-C.
I built a pretty cool string extractor utility and corresponding processing library to facilitate automatic localization of string resources in a native C++ application without refactoring the code to extract the strings from where they were used, with the added benefit of allowing cross-language string pooling of localized strings.
I also built a cool operator_cast<> function (with some help from the SO community) to help codify programming intent when using custom casting operators.
1. I made changes to International Wine Contest software that I had previously written. It had to be changed because a new sponsor used different logic for the contest, and we were notified about the changes three days beforehand, so a friend and I coded for about two days in a row, literally running from work to the contest in order to provide support. In the end, everything was flawless.
2. Released my first sales and inventory program for a video game retailer.
3. Started my coding blog.
Both in .NET, of course.