MQTTSource operator compilation error in IBM InfoSphere Streams - infosphere-spl

System: VMware Player running on Windows Server. One VM runs the IBM InfoSphere Streams (3.2) Quick Start Edition image and the other the IBM MessageSight (1.1) virtual appliance.
When using the MQTTSource operator I get the following compilation errors:
1. "make: *** No rule to make target `/home/streamsadmin/sdk/clients/c/include/MQTTAsync.h', needed by `build/operator/mqttStream.o'. Stop.
2. CDISP0141E ERROR: The compilation of the generated code failed."
The sdk path points to the IBM MessageSight SDK. I am trying to connect Streams to MessageSight using the MQTT operator. Is the problem with the SDK or with my code? The code is below.
composite MQTTtestApp {
    graph
        (stream<blob demoData> mqttStream; stream<rstring errorMessage> myErrorStream) = MQTTSource() {
            param
                serverURI: "192.168.206.130:1883";
                topics: ["DemoMessagingPolicy"];
                format: block;
            output
                myErrorStream: errorMessage = getError();
        }

        stream<rstring dataSchema> ParsedMsg = Parse(mqttStream) {
            param
                format: csv;
        }

        () as myMessageSink = FileSink(ParsedMsg) {
            param
                file: "data.csv";
                format: csv;
        }
}

In this version of the operator, you need to do the following to get the code to compile:
1) Download the MQTT client and install it on the VM. See this link for details: http://www-01.ibm.com/support/knowledgecenter/SSCRJU_3.2.0/com.ibm.swg.im.infosphere.streams.messaging-toolkit.doc/doc/msgtoolkit-reqs.html?lang=en
2) Set the STREAMS_MESSAGING_MQTT_HOME environment variable. It must point to the install location of the MQTT client, for example export STREAMS_MESSAGING_MQTT_HOME=/home/streamsadmin/mqtt-client (the path here is illustrative; use your actual install location).
There is a newer version of the MQTT operators that is easier to set up. In the new version, the MQTT client is included as part of the package, so you no longer need to install the client separately or set the environment variable. See this project for details:
http://ibmstreams.github.io/streamsx.messaging/

Related

Couchbase Java SDK times out with BUCKET_NOT_AVAILABLE

I am doing a lookup operation with Couchbase Java SDK 3.0.9 which looks like this:
// Set up
bucket = cluster.bucket("my_bucket")
collection = bucket.defaultCollection()
// Look up operation
val specs = listOf(LookupInSpecStandard.get("hash"))
collection.lookupIn(id, specs)
The error I get is BUCKET_NOT_AVAILABLE. Here is the full message:
com.couchbase.client.core.error.UnambiguousTimeoutException: SubdocGetRequest, Reason: TIMEOUT {"cancelled":true,"completed":true,"coreId":"0xdb7f8e4800000003","idempotent":true,"reason":"TIMEOUT","requestId":608806,"requestType":"SubdocGetRequest","retried":39,"retryReasons":["BUCKET_NOT_AVAILABLE"],"service":{"bucket":"export","collection":"_default","documentId":"export:main","opaque":"0xcfefb","scope":"_default","type":"kv"},"timeoutMs":15000,"timings":{"totalMicros":15008977}}
The strange part is that this code hasn't been touched for months and the lookup broke all of a sudden. The CB cluster is working fine; its version is Enterprise Edition 6.5.1 build 6299.
Do you have any ideas what might have gone wrong?
Note that in Couchbase Java SDK 3.x, the Cluster::bucket method returns instantly and continues opening the bucket in the background. So the first operation you perform, the lookupIn here, needs to wait for that resource opening to complete before it can proceed. It looks like opening the Couchbase bucket took a little longer than usual and you got a timeout.
I recommend using the Bucket::waitUntilReady method after opening a bucket, to block until the resource opening is complete:
bucket = cluster.bucket("my_bucket")
bucket.waitUntilReady(Duration.ofMinutes(1));
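For context, here is a minimal sketch of the full flow against the Java SDK 3.x. The connection string, credentials, and document id are placeholders, and it uses the public LookupInSpec helper rather than LookupInSpecStandard:

import com.couchbase.client.java.Cluster
import com.couchbase.client.java.kv.LookupInSpec
import java.time.Duration

fun main() {
    // Placeholder connection details; adjust to your cluster
    val cluster = Cluster.connect("couchbase://127.0.0.1", "user", "password")
    val bucket = cluster.bucket("my_bucket")
    // Block until the bucket is actually open before issuing the first operation
    bucket.waitUntilReady(Duration.ofSeconds(30))
    val collection = bucket.defaultCollection()
    // The same subdocument lookup as in the question
    val specs: List<LookupInSpec> = listOf(LookupInSpec.get("hash"))
    val result = collection.lookupIn("some-document-id", specs)
    println(result)
    cluster.disconnect()
}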
This problem can also occur because of a firewall. You need to allow these ports:
Client-to-node
Unencrypted: 8091-8097, 9140 [3], 11210
Encrypted: 11207, 18091-18095, 18096, 18097
You can find more details here:
https://docs.couchbase.com/server/current/install/install-ports.html#_footnotedef_2

AWS .NET SDK on Linux

I am currently moving an ASP.NET application made by a third party from Windows to Linux. I read the documentation and nothing indicates this should be a problem, but sadly
var profile = new CredentialProfile(profileName, credentials) {
Region = RegionEndpoint.EUWest1
};
var netSDKFile = new NetSDKCredentialsFile();
netSDKFile.RegisterProfile(profile);
throws the following exception
Unhandled Exception: Amazon.Runtime.AmazonClientException: The encrypted store is not available. This may be due to use of a non-Windows operating system or Windows Nano Server, or the current user account may not have its profile loaded.
at Amazon.Util.Internal.SettingsManager.EnsureAvailable()
at Amazon.Runtime.CredentialManagement.NetSDKCredentialsFile..ctor()
Is the Amazon .NET SDK (or a part of it) not supported on Linux? If that is the case, is there a possible workaround?
For the most part, there is very little that is supported on Windows but not on Linux. Off the top of my head, I can't think of anything besides NetSDKCredentialsFile, which is due to the fact that it uses the Win32 API to encrypt credentials.
You can use SharedCredentialsFile instead to register a profile in the credentials file stored under ~/.aws/credentials. This is the same credential store supported by all of the other AWS SDKs and tools.
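As a minimal sketch, the profile registration from the question could look like this with SharedCredentialsFile (the key values are placeholders):

using Amazon;
using Amazon.Runtime.CredentialManagement;

var options = new CredentialProfileOptions
{
    AccessKey = "access_key",  // placeholder
    SecretKey = "secret_key"   // placeholder
};
var profile = new CredentialProfile("default", options)
{
    Region = RegionEndpoint.EUWest1
};
// Writes to ~/.aws/credentials instead of the encrypted Windows-only store
var sharedFile = new SharedCredentialsFile();
sharedFile.RegisterProfile(profile);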
Following on from Norm's answer, I found this resource that explains how to use shared credentials: https://medium.com/@somchat/programming-using-aws-net-sdk-9ce3f5119633
This is how I was previously using NetSDKCredentials, which won't work for Linux/Mac OS:
//Try this code on a non-Windows platform and you will see the above error
var options = new CredentialProfileOptions
{
AccessKey = "access_key",
SecretKey = "secret_key"
};
var profile = new CredentialProfile("default", options);
profile.Region = RegionEndpoint.USWest1;
NetSDKCredentialsFile file = new NetSDKCredentialsFile();
file.RegisterProfile(profile);
But I was then able to use this example to use SharedCredentials:
var credProfileStoreChain = new CredentialProfileStoreChain();
if (credProfileStoreChain.TryGetAWSCredentials("default", out AWSCredentials awsCredentials))
{
Console.WriteLine("Access Key: " + awsCredentials.GetCredentials().AccessKey);
Console.WriteLine("Secret Key: " + awsCredentials.GetCredentials().SecretKey);
}
Console.WriteLine("Hello World!");
You'll then be able to see your code is able to access the keys:
Access Key: A..................Q
Secret Key: 8.......................................p
Hello World!
I then used System.Runtime.InteropServices.RuntimeInformation.IsOSPlatform() (as I am using this code on both Windows and Linux) to determine which credentials to use:
using System.Runtime.InteropServices;
//NETSDK Credentials only work on Windows - must use SharedCredentials on Linux
bool isLinux = System.Runtime.InteropServices.RuntimeInformation.IsOSPlatform(OSPlatform.Linux);
if (isLinux) {
//Use SharedCredentials
} else {
//Use NetSDKCredentials
}
You may find this section of the AWS documentation helpful, too: https://docs.aws.amazon.com/sdk-for-net/v3/developer-guide/net-dg-config-creds.html#creds-locate

Method not found: Microsoft.AnalysisServices

I have created an ETL setup for a data warehouse with SSIS packages.
Everything works fine until the very last step, which is an "Analysis Services Processing Task".
Whenever I add my cube and press OK I get the following error:
"Method not found: 'Void Microsoft.AnalysisServices.Commands.ProcessCommand.set_Type(Microsoft.AnalysisServices.ProcessType)'."
I suspect there is some issue with a DLL, but I'm not sure which.
I found a Microsoft.AnalysisServices.dll under my SQL Server install (C:\Program Files (x86)\Microsoft SQL Server\120\SDK\Assemblies)
I did not find it in my Visual Studio installation folders.
I was able to find a workaround.
I was trying to reproduce the problem with the AdventureWorks dataset, to verify that the problem was occurring due to the .dll and not my cube or anything else.
So I created a new OLTP & DWH with a cube from the AdventureWorks dataset and created a new SSIS project whose only step was to process the cube.
With this setup I did not get the same error as I did in the original project; seemingly, there was nothing wrong with the .dll?
However, I also tried changing the target server and cube to my original ones, and to my surprise it worked!
So I saved that package, imported it into my original project, executed it from there, and it works.
EDIT: Please also see Pavel's possible solution.
Strange thing: if we create a new SSIS project in Project Deployment mode, leave the 2017 version, and deploy it on our 2016 SSIS services, all works perfectly fine. So... we just need to migrate to Project Deployment mode ))) – Pavel Botygin
We have the same problem.
One interesting thing: you can try switching your project compatibility to SQL Server vNext, then create your processing task normally, clicking OK and everything else successfully (what a miracle it is!), then switch back to the desired version and deploy.
But if you have Script Tasks in the same package, you can try another workaround (which I'm actually using now): use a Script Task to populate a variable (User::DimensionsProcessingCommand in my example) for the "Analysis Services Execute DDL Task". It's a slightly more complicated way, but very useful in the future.
public void Main()
{
    Boolean tstFire = false;
    Microsoft.AnalysisServices.Server myServer = new Microsoft.AnalysisServices.Server();
    //Get connection to SSAS database from package
    ConnectionManager myConn = Dts.Connections["SSAS"];
    //Template for future use
    String ProcessingCommandTemplate = "<Batch xmlns=\"http://schemas.microsoft.com/analysisservices/2003/engine\"><Parallel>XXXXXXX</Parallel></Batch>";
    String myProcessingCommand = "";
    //Dictionary for gathering dimensions w/o duplicates
    Dictionary<Dimension, Cube> amoDimDictionary = new Dictionary<Dimension, Cube>();
    String myServerName = myConn.ConnectionString;
    String myDatabaseName = myConn.Properties["InitialCatalog"].GetValue(myConn).ToString();
    //Connect to SSAS server instance
    myServer.Connect(myServerName);
    Database amoDb = myServer.Databases.FindByName(myDatabaseName);
    //Get all dimensions used in cubes
    foreach (Cube amoCube in amoDb.Cubes)
    {
        foreach (CubeDimension amoDimension in amoCube.Dimensions)
        {
            if (!amoDimDictionary.ContainsKey(amoDimension.Dimension))
            {
                amoDimDictionary.Add(amoDimension.Dimension, amoCube);
            }
        }
    }
    //Start XML capture for the dimension processing commands
    myServer.CaptureXml = true;
    foreach (Dimension amoDimension in amoDimDictionary.Keys)
    {
        if (amoDimension.State == AnalysisState.Unprocessed)
        {
            amoDimension.Process(ProcessType.ProcessFull);
        }
        else
        {
            amoDimension.Process(ProcessType.ProcessUpdate);
        }
    }
    myServer.CaptureXml = false;
    //Build command
    foreach (String strXML in myServer.CaptureLog)
    {
        myProcessingCommand = myProcessingCommand + strXML.ToString();
    }
    myProcessingCommand = ProcessingCommandTemplate.Replace("XXXXXXX", myProcessingCommand);
    Dts.Variables["User::DimensionsProcessingCommand"].Value = myProcessingCommand.ToString();
    //Command output to see at runtime from VS 2015
    Dts.Events.FireInformation(1, "", Dts.Variables["User::DimensionsProcessingCommand"].Value.ToString(), "", 0, ref tstFire);
}
P.S.
Our DEV machine has a from-scratch install of SQL Server 2016, Visual Studio 2015, and SSDT 17.1.
When we tried to develop SSIS packages under SQL Server 2016 compatibility, we stumbled on so many problems that we just stopped counting them. We googled and tuned the GAC back and forth without any result.
The 14.0 Microsoft dev environment seems buggy and just broken if you try to create something under 13.0 and lower versions.
This MS Forum post has the following advice.
If it exists, cut and paste the following folder from the GAC to somewhere else:
C:\Windows\Microsoft.NET\assembly\GAC_MSIL\Microsoft.AnalysisServices.DeploymentEngine
Then, rebuild the project:
1. Right-click on the project.
2. Click Properties.
3. Expand Configuration Properties and select Deployment.
4. For Target Server, enter \<SSAS Instance Name> (make sure the SSAS server is a Multidimensional SSAS instance).
5. Click Apply.
6. Save the project and restart the IDE (SSDT).
7. Start SSDT, select the project, and rebuild the solution/project.
8. Test again.
This seems to be because of a deployment target version that is not supported by the VS 2015 components.
Go to Project -> Properties.
In Configuration Properties -> General, set TargetServerVersion to SQL Server 2017.

Couchbase find() error

As per what the documentation teaches here, under the "Reading NoSQL Documents" part, I copied the exact same code below:
UserModel.find({}, function(error, result) {
    if(error) {
        console.log("An error happened -> " + JSON.stringify(error));
    }
    // Do something with the resulting Ottoman models
});
But it's giving me the error:
TypeError: First argument needs to be a ViewQuery, SpatialQuery or N1qlQuery.
Why is it producing the error? And what are ViewQuery, SpatialQuery and N1qlQuery?
This is a known issue caused by using a different version of the Couchbase Node.js SDK than the one Ottoman.js uses internally. You can either fork Ottoman.js and upgrade the SDK version it uses internally, or downgrade your application's SDK to match Ottoman.js's. This will be resolved in the next release of the Couchbase Node.js SDK and Ottoman.js (it will allow you to specify which version to use internally).

Retrieving selenium logs and screenshots from grid back in Intern

There are two parts to my question with regard to the Intern workflow in case of exceptions:
1. Per the Selenium Desired Capabilities specification, RemoteWebDriver captures screenshots on exceptions by default (unless it is disabled by setting webdriver.remote.quietExceptions). Is it possible to retrieve these screenshots in Intern?
2. I have set up a Selenium Grid with multiple platforms/browsers and can execute Intern tests on the grid successfully. However, I am trying to gather the logs back in my Intern environment so that I don't have to sign on to each machine on the grid to see them. I am particularly interested in the server, driver, and browser logs described in the Selenium logging guide. I tried adding the following Intern configuration based on the Selenium Desired Capabilities guide but wasn't able to get any logs:
capabilities: {
    'selenium-version': '2.39.0',
    'driver': 'ALL',
    'webdriver.log.driver': 'INFO',
    'webdriver.chrome.logfile': 'C:\\intern\\logs\\chromedriver.log',
    'webdriver.firefox.logfile': 'C:\\intern\\logs\\firefox.log'
}
To get a screenshot yourself you can call remote.takeScreenshot().then(function (base64Png) {}), but there is no way that I am aware of to retrieve the automatically generated screenshots; there appears to be nothing in the WebDriver JsonWireProtocol to do so.
To retrieve logs, you can call remote.log(typeOfLog).then(function (logs) {}). See the JsonWireProtocol section on log for more information on what you get back.
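As an illustration, here is a minimal sketch of both calls inside an Intern functional test; the file names are placeholders, and it assumes Node's fs module is available:

var fs = require('fs');

// `remote` is the WebDriver command object Intern exposes to functional tests (this.remote)
remote.takeScreenshot().then(function (base64Png) {
    // Save a screenshot we captured ourselves
    fs.writeFileSync('screenshot.png', new Buffer(base64Png, 'base64'));
});

remote.log('browser').then(function (logs) {
    // `logs` is an array of log entries returned by the Selenium server
    fs.writeFileSync('browser.log', JSON.stringify(logs, null, 2));
});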
There is a way to capture the automatically generated screenshots. Using a custom reporter (https://github.com/theintern/intern/wiki/Using-and-Writing-Reporters#custom-reporters) I was able to save a screenshot and log browser console output to a file.
As mentioned in the link above, when the '/test/fail' topic callback is called, it is passed a test object. If the webdriver had failed internally, this object will have a 'test.error.cause.value.screen' variable in it. This is the variable that stores the webdriver-generated screenshot. So the following is what I did:
if (test.error.cause.value.screen) {
    // Store the webdriver-generated screenshot (base64) in a file using node's fs library
    // (file name is illustrative; assumes `var fs = require('fs')` at the top of the reporter)
    fs.writeFileSync('fail-screenshot.png', new Buffer(test.error.cause.value.screen, 'base64'));
}
If you look at the error object, you will also see additional error information that the webdriver has logged.
Regarding the browser logs, @C Snover has hit the nail on the head with that one. But that information is only available inside the remote object, which is available when the '/session/start' topic callback is called. So what I did is create a map from the session ID of the remote object to the remote object itself. Luckily, the test object has the session ID in it too, so I retrieved the remote object from my map using test.sessionId as the key and logged the browser logs as well. In short, this is what I did:
'/session/start': function (remote) {
    sessions[remote.sessionId] = { remote: remote };
},
'/test/fail': function (test) {
    var remote = sessions[test.sessionId].remote;
    remote._wd.log('browser', function (err, logs) {
        // Store the logs array in a file using node's fs library
        // (file name is illustrative; assumes `fs` is required and `sessions` is declared at the top of the reporter)
        fs.writeFileSync('browser-' + test.sessionId + '.log', JSON.stringify(logs, null, 2));
    });
}