How to store a dataset retrieved with Siphon and the ncss service - python-siphon

I use the Siphon library to retrieve a subset (using the ncss service) of a remote NetCDF dataset. Is there a convenient way of storing the result in a local .nc file or do I have to create a new Dataset for writing and then clone everything in there?

If you have a query (query) and client (ncss) set up, then you can do something like this:
with open('mydata.nc', 'wb') as outfile:
    outfile.write(ncss.get_data_raw(query))
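For context, here is a minimal end-to-end sketch of how the client and query might be set up before writing the bytes to disk. The endpoint URL, variable name, and bounding box below are hypothetical placeholders, not taken from the question:

from siphon.ncss import NCSS

# Hypothetical NCSS endpoint; substitute the URL of your own dataset.
ncss = NCSS('https://thredds.ucar.edu/thredds/ncss/grid/some/dataset')
query = ncss.query()
query.variables('Temperature')                  # variable(s) to subset
query.lonlat_box(north=50, south=40, east=-100, west=-110)
query.accept('netcdf4')                         # ask the server for NetCDF-4 output

# get_data_raw() returns the raw bytes of the server response,
# so they can be written straight to a local .nc file.
with open('mydata.nc', 'wb') as outfile:
    outfile.write(ncss.get_data_raw(query))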

Related

I want to compare the data I have in a CSV file to the data in the LDAP production server

I want to compare the data I have in a CSV file to the data in the LDAP production server.
There are thousands of users' records in the CSV file, and I want to compare them with the data in the production server.
Let's suppose I have user ID xtz12345 in the CSV file with uid number 123456. Now I want to cross-check the uidNumber of the same user ID xtz12345 in the production server.
Is there any way I can automate this? There are thousands of user IDs to be checked, and doing it manually would take a lot of time. Can anyone suggest what I should do?
A PowerShell script is a good starting place.
Import the ActiveDirectory module in PowerShell to fetch information from AD (assuming Windows AD; download and install the RSAT tools first).
Use Import-Csv in PowerShell to read the CSV values, then compare the first data set with the second.
Happy to help
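The steps above use PowerShell against Active Directory; as a different route under the same idea, here is a minimal Python sketch of the cross-check against a generic LDAP server using the ldap3 package. The server address, bind credentials, base DN, file name, and column names are all hypothetical placeholders:

import csv
from ldap3 import Server, Connection, ALL

# Hypothetical connection details; replace with your production server's.
server = Server('ldap.example.com', get_info=ALL)
conn = Connection(server, user='cn=admin,dc=example,dc=com',
                  password='secret', auto_bind=True)

# Assumes the CSV has 'userid' and 'uidNumber' columns.
with open('users.csv', newline='') as f:
    for row in csv.DictReader(f):
        conn.search('dc=example,dc=com',
                    f"(uid={row['userid']})",
                    attributes=['uidNumber'])
        if not conn.entries:
            print(f"{row['userid']}: not found on the server")
        elif str(conn.entries[0].uidNumber) != row['uidNumber']:
            print(f"{row['userid']}: CSV has {row['uidNumber']}, "
                  f"server has {conn.entries[0].uidNumber}")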

What is an alternative to CSV data set config in JMeter?

We want to use 100 credentials from a .csv file, but we would like to know whether there is any alternative to this available in JMeter.
If you have the credentials in the CSV file there are no better ways of "feeding" them to JMeter than CSV Data Set Config.
Just in case if you're still looking for alternatives:
__CSVRead() function (see the sketch after this list). The disadvantage is that the function reads the whole file into memory, which might be a problem for large CSV files. The advantage is that you can choose/change the name of the CSV file dynamically (at runtime), while with the CSV Data Set Config the file name is immutable and cannot be changed once it's initialized.
JDBC Test Elements - allows fetching data (i.e. credentials) from the database rather than from file
Redis Data Set - allows fetching data from Redis data storage
HTTP Simple Table Server - exposes a simple HTTP API for fetching data from CSV (useful for a distributed architecture when you want to ensure that different JMeter slaves use different data); this way you don't have to copy the .csv file to the slave machines and split it
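For the __CSVRead() option, usage looks like the following (the file name credentials.csv is a placeholder); a numeric argument selects a column of the current row, and next advances to the following row:

${__CSVRead(credentials.csv,0)}     read column 0 (e.g. the username) of the current row
${__CSVRead(credentials.csv,1)}     read column 1 (e.g. the password) of the current row
${__CSVRead(credentials.csv,next)}  move the file pointer to the next row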
There are a few alternatives:
JMeter plugin for reading random CSV data: Random CSV Data Set Config
JMeter function: __CSVRead
Reading CSV file data from a JSR223 PreProcessor
CSV Data Set Config is simple, easier to use, and available out of the box.

How can I pass multiple CSV files in a directory with the same column headers to a single REST API in JMeter and test with 1000 users

Test scenario: the folder contains multiple CSV files, and the columns are the same in all of them. I have to pass the CSV files one after the other to a single REST API (GET call).
Each user (1000 users in total) should get assigned a set of records/rows from the CSV file currently in use.
I am new to JMeter and tried to find a solution using the CSV Data Set Config, but I realized I cannot pass multiple CSV files with it.
I also looked at the __CSVRead() function, but I could not pass the CSV file name dynamically using Beanshell scripting.
Can someone please help me with this?
The CSV file names from the folder can be read one by one using the Directory Listing Config plugin.
Depending on the nature of the CSV files, you might want to use either the __CSVRead() or the __StringFromFile() function directly in your HTTP Request sampler; you don't need to go for any scripting.
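A minimal sketch of how the two pieces fit together, assuming the Directory Listing Config plugin is set to store the current file path in a variable named fileName (the variable name is configurable in the plugin):

${__CSVRead(${fileName},0)}     read column 0 of the file currently selected by Directory Listing Config
${__CSVRead(${fileName},next)}  advance to the next row of that file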

select library in SAS from SSIS

I am using SSIS to extract some data out of a SAS server.
using this connection setup (SAS IOM Data Provider 9.3)
I can get the connection to read the default Library/Shared data folder.
What do I need to change/set to get it to read a different library?
These are the properties of the libraries:
The one on the left is the one I can read; the one on the right is the one I am trying to access.
If your data folder contains *.sas7bdat files then you could use this:
http://microsoft-ssis.blogspot.com/2016/09/using-sas-as-source-in-ssis.html
Simply write your SAS libname statement inside the SAS Workspace Init Script box, e.g. as follows:
libname YOURLIB "/your/path/to/sas/datasets" access=readonly;
More info: http://support.sas.com/kb/33/037.html

Big Query table to be extracted as JSON in Local machine

I have an idea of how to extract table data to Cloud Storage using the bq extract command, but I would like to know whether there are any options to extract a BigQuery table as newline-delimited JSON to a local machine.
I can extract table data to GCS via the CLI and also download JSON data from the web UI, but I am looking for a way to download table data as JSON to the local machine using the bq CLI. Is that even possible?
You need to use Google Cloud Storage for your export job. Exporting data from BigQuery is explained here; check also the variants for different path syntaxes.
Then you can download the files from GCS to your local storage.
The gsutil tool can help you download the file from GCS to the local machine.
You first need to export to GCS, then transfer the file to the local machine.
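Put together, the two steps might look like this; the dataset, table, and bucket names below are placeholders:

# Step 1: export the table from BigQuery to GCS as newline-delimited JSON
bq extract --destination_format=NEWLINE_DELIMITED_JSON \
    'mydataset.mytable' gs://mybucket/mytable.json

# Step 2: copy the exported file from GCS to the local machine
gsutil cp gs://mybucket/mytable.json .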
If you use the bq CLI tool, you can set the output format to JSON and redirect the output to a file. This way you can achieve a local export of sorts, but it has certain other limits.
This exports the first 1000 rows as JSON:
bq --format=prettyjson query --n=1000 "SELECT * from publicdata:samples.shakespeare" > export.json
It's possible to extract data without using GCS, directly to your local machine, using the bq CLI.
Please see my other answer for details: BigQuery Table Data Export