clearcase - Error while creating a view - configuration

I am getting the below error while trying to create a view:
CRMSV3062: Problems performing setcs.
CCRC WAN Server: Error: Syntax error on line 2, near column 2, file "/var/tmp/ccrctemp/tmp21436".
CCRC WAN Server: Error: Config spec parsing failed: "/var/tmp/ccrctemp/tmp21436".
My config spec is:
element * CHECKEDOUT
element  * .../CCR_PPD_LINEITEM/LATEST
element  *  .../ccrportal/LATEST -mkbranch CCR_PPD_LINEITEM
element * /main/LATEST  -mkbranch ccrportal

I am working on a desktop... it works on laptops.
Check the differences in terms of:
user rights (what is your CLEARCASE_PRIMARY_GROUP on both machines?)
view protection (check what cleartool lsview -l -full -pro -cview returns on both machines)
Try with a simple config spec on the desktop:
element * CHECKEDOUT
element * /main/LATEST
Make sure that at least that config spec can be set.
Then work your way back to your intended config spec by adding bits to it, until you determine exactly which part of the target config spec makes the setcs fail.
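For example, a rough sketch of that incremental approach with cleartool (the view tag myview and the file names are hypothetical; if you only have CCRC access, use the remote client's equivalent commands):
cleartool catcs -tag myview > current.cs       # save the current config spec for reference
cleartool setcs -tag myview minimal.cs         # minimal.cs contains only the two lines above
cleartool setcs -tag myview plus_lineitem.cs   # then add back one element rule at a time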

File component's [consumer.]bridgeErrorHandler in conjunction with startingDirectoryMustExist

I have a route (with Camel 2.23.1) like:
from("file://not.existing.dir?autoCreate=false&startingDirectoryMustExist=true&consumer.bridgeErrorHandler=true")
.onException(Exception.class)
.handled(true)
.log(LoggingLevel.ERROR, "...exception text...")
.end()
.log(LoggingLevel.INFO, "...process text...")
...
(I also tried it with just &bridgeErrorHandler, since, according to the latest docs, the consumer. prefix no longer seems to be necessary.)
According to the doc of startingDirectoryMustExist:
| startingDirectoryMustExist | [...] Will thrown an exception if the directory doesn’t exist. |
the following exception is thrown:
org.apache.camel.FailedToCreateRouteException: Failed to create route route1:
Route(route1)[[From[file://not.existing.dir?autoCreate=false...
because of Starting directory does not exist: not.existing.dir
...
but, despite the doc and the description of [consumer.]bridgeErrorHandler, it is propagated to the caller, i.e. neither "exception text" nor "process text" is printed.
There is a unit test FileConsumerBridgeRouteExceptionHandlerTest that covers consumer.bridgeErrorHandler, so I think this basically works. Can it be that [consumer.]bridgeErrorHandler doesn't work in conjunction with the exception thrown by startingDirectoryMustExist?
Do I have to write my own [consumer.]exceptionHandler as mentioned in this answer to "Camel - Stop route when the consuming directory not exists"?
There is also a post on the mailing list from 2014 that reports similar behaviour with startingDirectoryMustExist and consumer.bridgeErrorHandler.
UPDATE
After TRACEing and debugging through the code I found that the exception is propagated as follows:
FileEndpoint.createConsumer()
throw new FileNotFoundException(...);
--> RouteService.warmUp()
throw new FailedToCreateRouteException(...)
--> DefaultCamelContext.doStart()
(re)throw e
--> ServiceSupport.start()
(re)throw e
I couldn't find any point where bridgeErrorHandler comes into play.
Breakpoints on BridgeExceptionHandlerToErrorHandler's constructor and on all of its handleException() methods are never hit.
Am I still missing something?
You should use the directoryMustExist option instead; then the error occurs during polling, which is where the bridge error handler can be triggered. The startingDirectoryMustExist option is checked while the consumer is created, and therefore before the polling where the bridge error handler operates.
See also the JIRA ticket: https://issues.apache.org/jira/browse/CAMEL-13174
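A minimal sketch of that suggestion, reusing the rest of the route from the question (the endpoint URI and log texts are just the question's placeholders):
from("file://not.existing.dir?autoCreate=false&directoryMustExist=true&consumer.bridgeErrorHandler=true")
    // the missing directory is now detected during polling, so the exception
    // is bridged into the routing error handler and handled by onException
    .onException(Exception.class)
        .handled(true)
        .log(LoggingLevel.ERROR, "...exception text...")
    .end()
    .log(LoggingLevel.INFO, "...process text...")
    ...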

Working on migration of SPL 3.0 to 4.2 (TEDA)

I am working on migrating 3.0 code into the new 4.2 framework. I am facing a few difficulties:
1. How do I do CDR-level deduplication in the new 4.2 framework? (Note: table-level deduplication is already done.)
2. Where should I implement PostDedupProcessor: context or chainsink custom? In either case, do I need to remove duplicate hash codes from the list or just reject the tuples? I am also updating columns for a few tuples here.
3. My file is not moving into the archive. The temporary output file is generated, but it is empty and outside the load directory. What could be the possible reasons? I have thoroughly checked the config parameters, and after adding logs it looks like the correct output is being sent from the transformer custom, so I don't know where it is stuck. I printed the TableRowGenerator stream to the logs (at the end of DataProcessor).
1. and 2.:
You need to select the type of deduplication. There is not a big difference between choosing table-level or CDR-level deduplication.
The ite.businessLogic.transformation.outputType parameter controls this. There is only one dedup; you cannot have both.
Select recordStream for CDR-level deduplication, and do the transformation to table row format (e.g. if you want to use the TableFileWriter) in xxx.chainsink.custom::PostContextDataProcessor.
In xxx.chainsink.custom::PostContextDataProcessor you need to add custom code for duplicate handling: reject (discard) tuples, set special column values, or write them to different target tables.
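For instance, the relevant setting would look something like this (a sketch; which configuration file it belongs in depends on your project setup):
ite.businessLogic.transformation.outputType=recordStream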
3.:
Possible reasons could be:
missing forwarding of window punctuations or of the statistics tuple
an error in the BloomFilter configuration; you would see this easily because the PE is down and the error log gives hints about wrong sha2 functions being used
To troubleshoot your ITE application, I recommend enabling the following debug sinks if checking the StreamsStudio live graph is not sufficient:
ite.businessLogic.transformation.debug=on
ite.businessLogic.group.debug=on
ite.businessLogic.sink.debug=on
Run a test with a single input file only and check the flow of your record and statistics tuples. The debug sinks also write punctuation markers to the debug files.

Operation not allowed after ResultSet closed in solr import

I encountered an error while doing a full-import in Solr 6.6.0.
I am getting the exception below.
This happens when I set
batchSize="-1" in my db-config.xml
If I change this value to, say, batchSize="100" then the import runs without any error.
But the recommended value for this is "-1".
Any suggestion why Solr is throwing this exception?
By the way, the data I am trying to import is not huge: just 250 documents.
Stack trace:
org.apache.solr.handler.dataimport.DataImportHandlerException: java.sql.SQLException: Operation not allowed after ResultSet closed
at org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:61)
at org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:464)
at org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator$1.hasNext(JdbcDataSource.java:377)
at org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:133)
at org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:75)
at org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:267)
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:475)
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:516)
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:414)
at org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:329)
at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:232)
at org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:415)
at org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:474)
at org.apache.solr.handler.dataimport.DataImporter.lambda$runAsync$0(DataImporter.java:457)
at java.lang.Thread.run(Thread.java:745)
By the way, I am getting one more warning:
Could not read DIH properties from /configs/state/dataimport.properties :class org.apache.zookeeper.KeeperException$NoNodeException
This happens when the config directory is not writable.
How can we make the config directory writable in SolrCloud mode?
I am using ZooKeeper as a watchdog. Can we go ahead and change the permissions of the config files that are in ZooKeeper?
Your help is greatly appreciated.
Using batchSize="-1" (which controls the JDBC fetch size) is only recommended if you have problems running without it. Its behaviour is up to the JDBC driver, but the reason people assume it is recommended is this sentence from the old wiki:
DataImportHandler is designed to stream row one-by-one. It passes a fetch size value (default: 500) to Statement#setFetchSize which some drivers do not honor. For MySQL, add batchSize property to dataSource configuration with value -1. This will pass Integer.MIN_VALUE to the driver as the fetch size and keep it from going out of memory for large tables.
Unless you're actually seeing issues with the default values, leave the setting alone and assume your JDBC driver does the correct thing (.. which it might not do with -1 as the value).
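For reference, the setting in question lives on the dataSource element of the DIH configuration; a sketch with placeholder driver, URL and credentials:
<dataSource type="JdbcDataSource"
            driver="com.mysql.jdbc.Driver"
            url="jdbc:mysql://localhost:3306/mydb"
            user="solr_user"
            password="..."
            batchSize="-1"/>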
The reason for dataimport.properties having to be writable is that it writes a property for the last time the import ran to the file, so that you can perform delta updates by referencing the time of the last update in your SQL statement.
You'll have to make the directory writable for the client (solr) if you want to use this feature. My guess would be that you can ignore the warning if you're not using delta imports.
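For completeness, a sketch of the kind of delta query that relies on dataimport.properties (entity, table and column names are hypothetical):
<entity name="doc"
        query="SELECT id, title FROM documents"
        deltaQuery="SELECT id FROM documents WHERE last_modified &gt; '${dataimporter.last_index_time}'"
        deltaImportQuery="SELECT id, title FROM documents WHERE id = '${dih.delta.id}'"/>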

Chef - override node attribute

We have a recipe which uses a node attribute:
python_pip 'request_proxy' do
  virtualenv '/opt/proxy/.env'
  version node.default['request-proxy']['version']
  Chef::Log.info('Auth request proxy #{version}')
  action :install
end
The attribute is set at node level and everything is OK, but for test purposes I want to override it on my local (kitchen/vagrant) node. As a first step I added the attribute to my .kitchen.yml:
suites:
  - name: default
    run_list:
      - recipe[proxy_install]
    attributes: {request-proxy: {'version': '1.0.8'}}
Unfortunately the node still uses the "default" version. Everything runs without any error and completely ignores my attributes.
Later I tried adding this to a parameter file (chef-client -j params.json) on a production node; the result was the same.
What did I miss? What am I doing wrong?
P.S. Chef::Log.info('Auth request proxy #{version}') is also completely ignored ??
Can you try using YAML? kitchen.yml is not a JSON file, so I'm not sure that your JSON embedded inside it would work.
attributes:
  request-proxy:
    version: '1.0.8'
Also, you probably should not be using node.default, unless you want to pick up the default value only (and never any overrides). If you want to use the attribute precedence (default, normal, override, force) in Chef, you should be doing:
node['request-proxy']['version']
Finally, you also have a single-quoted string with a variable. This will never work like you expect in Ruby (are you running rubocop? It would have caught this). Try it with double quotes, and remove it from the middle of your resource:
Chef::Log.info("Auth request proxy #{version}")

Has KeyRegenerationInterval any effect in SSH2?

I am setting up a new Linux server and am editing sshd_config. I will use protocol version 2 (which is the default anyway):
Protocol 2
But in the default config file I also find these two lines:
KeyRegenerationInterval 3600
ServerKeyBits 768
The manpage sshd_config(5) says about KeyRegenerationInterval:
In protocol version 1, the ephemeral server key is automatically
regenerated after this many seconds (if it has been used). The purpose
of regeneration is to prevent decrypting captured sessions by later
breaking into the machine and stealing the keys. The key is never
stored anywhere. If the value is 0, the key is never regenerated. The
default is 3600 (seconds).
So I know what this parameter does in SSH1. But I don't use SSH1; I use the default version, SSH2. The manpage gives no information about the effect of KeyRegenerationInterval in protocol version 2. Does KeyRegenerationInterval have any effect in protocol version 2? And what about ServerKeyBits?
What will happen if I leave these settings in the config file when I set Protocol 2? What will happen if I delete those two lines?
I guess that those two parameters are ignored if the protocol version is set to 2, but this is just a guess. From what I have read so far I can't know for sure. Do you KNOW (not guess) what effect KeyRegenerationInterval and ServerKeyBits have in SSH2?
I'm sure that you already know this. I just didn't want to leave the question unanswered. These options (KeyRegenerationInterval & ServerKeyBits) affect the server key that is generated for SSH protocol 1. You should not have to worry about this if you demand that your connections adhere to protocol 2.
TL;DR: No, these options have no effect in SSH-2 (and SSH-1 support was removed in 2016).
When unsure, source code is the best documentation.
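For example, in a checkout of the openssh-portable mirror something like this is enough (case-insensitive, since the keywords are lowercased in the parser table):
git clone https://github.com/openssh/openssh-portable
grep -ni -e serverkeybits -e keyregenerationinterval openssh-portable/*.c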
If we search for ServerKeyBits and KeyRegenerationInterval in the entire OpenSSH source code, we find only this in servconf.c:
{ "serverkeybits", sDeprecated, SSHCFG_GLOBAL },
. . .
{ "keyregenerationinterval", sDeprecated, SSHCFG_GLOBAL },
. . .
case sDeprecated:
case sIgnore:
case sUnsupported:
    do_log2(opcode == sIgnore ?
        SYSLOG_LEVEL_DEBUG2 : SYSLOG_LEVEL_INFO,
        "%s line %d: %s option %s", filename, linenum,
        opcode == sUnsupported ? "Unsupported" : "Deprecated", arg);
    while (arg)
        arg = strdelim(&cp);
    break;
In other words, both options simply print a deprecation warning and have no further effect.
Then, using the blame feature, we find that the options were retired in commit c38ea6348 of Aug 23, 2016 (OpenSSH 7.4p1):
Remove more SSH1 server code: * Drop sshd's -k option. *
Retire configuration keywords that only apply to protocol 1, as well as the
"protocol" keyword. * Remove some related vestiges of protocol 1 support.
Before that they were used only for SSH-1. E.g. KeyRegenerationInterval:
{ "keyregenerationinterval", sKeyRegenerationTime, SSHCFG_GLOBAL },
. . .
case sKeyRegenerationTime:
    intptr = &options->key_regeneration_time;
    goto parse_time;
Used in sshd.c/L1442:
if ((options.protocol & SSH_PROTO_1) &&
    key_used == 0) {
    /* Schedule server key regeneration alarm. */
    signal(SIGALRM, key_regeneration_alarm);
    alarm(options.key_regeneration_time);
    key_used = 1;
}
Note: for SSH-2 there's a more powerful RekeyLimit.
For protocol 2, there's this:
RekeyLimit
Specifies the maximum amount of data that may be transmitted
before the session key is renegotiated, optionally followed by a
maximum amount of time that may pass before the session key is
renegotiated. The first argument is specified in bytes and
may have a suffix of 'K', 'M', or 'G' to indicate Kilobytes,
Megabytes, or Gigabytes, respectively.
The default is between '1G' and '4G', depending on the cipher.
The optional second value is specified in seconds and may use
any of the units documented in the TIME FORMATS section.
The default value for RekeyLimit is default none, which
means that rekeying is performed after the cipher's default
amount of data has been sent or received and no time based
rekeying is done.
Source:
https://www.freebsd.org/cgi/man.cgi?sshd_config(5)
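As an illustration, such a setting could look like this in sshd_config (the values are arbitrary examples, not recommendations):
# renegotiate the session key after 500 megabytes of traffic or 30 minutes, whichever comes first
RekeyLimit 500M 30m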