Show time after data loading of table - HTML

I have an HTML table that renders one big JSON object from a 1 MB file. In an Angular 6 application, I would like to inform the user how big the object was and how much time it took to display the data. Any pointers, please?
The incoming data can vary depending on the service.
The message should say that the object bound to the HTML table was x MB and that it took y seconds to load, shown after the table finishes displaying the data.
In fact, we are doing performance testing of big objects, and based on the results we are going to adjust things on the service side.

Create a variable with the current time before making the request to the API, something like
const requestStart = new Date();
Make the request.
When the response returns from the API, get the Content-Length header (I don't know which library you use to make requests, so I cannot help with how to read headers from it) and assign it to another variable - it will give you the file size.
Subtract the requestStart date from the current date - it will give you the time difference.
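For illustration, here is a minimal sketch of that idea using Angular's HttpClient (the service name and URL are placeholders; Content-Length is only available if the server sends it and, for cross-origin requests, exposes it via CORS):

// Minimal sketch: assumes Angular 6 HttpClient; TableDataService and
// the url argument are placeholders, not code from the question.
import { HttpClient, HttpResponse } from '@angular/common/http';
import { Injectable } from '@angular/core';

@Injectable()
export class TableDataService {
  constructor(private http: HttpClient) {}

  loadTableData(url: string): void {
    const requestStart = Date.now();
    // observe: 'response' exposes the headers as well as the body
    this.http.get<any[]>(url, { observe: 'response' })
      .subscribe((response: HttpResponse<any[]>) => {
        const seconds = (Date.now() - requestStart) / 1000;
        const length = response.headers.get('Content-Length');
        const mb = length ? (Number(length) / (1024 * 1024)).toFixed(2) : 'unknown';
        console.log(`The object bound to the table was ${mb} MB and took ${seconds} s to load.`);
      });
  }
}

Note that the subscription fires before Angular has rendered the rows; to time "after the table finishes displaying the data", take a second timestamp in ngAfterViewChecked (or in a setTimeout after assigning the data) once the rows are in the DOM.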

Related

Writing data from MATLAB to Firebase database

I am using MATLAB to write data to Firebase, using the following lines of code:
thingSpeakURL = 'https://hybrid-cabinet-265907.firebaseio.com/Ship A/Time Stamp.json';
lat = num2str(42);
lon = num2str(42);
data = struct('lat',lat,'lon',lon);
webwrite(thingSpeakURL,data)
The data is successfully written to Firebase, but it makes my original JSON data a child of a random string generated at run time.
For example, my JSON string is {lat: '40', lon: '40'}, but instead it creates a random string, say "Mxkkllslsll-1112", makes that random string the parent, and writes something like {"Mxkkllslsll-1112": {lat: '40', lon: '40'}} to the Firebase database.
Please have a look at the following image. It shows that the data I have written from MATLAB for Ship A is not written properly (the problem I described above). I want it to look like the data written for Ship B.
I want to write the data without any random string as a parent. Kindly assist me with that.
This is because webwrite uses the HTTP POST method by default.
As shown in the Firebase Realtime Database REST API documentation, if you do a POST you will push the data and therefore automatically generate a unique key every time a new child is added to the specified Firebase reference (the -MDJVMk..... value we can see in your question).
You need to use the PUT method.
I don't know MATLAB, but a quick look at the documentation shows that you need to use the RequestMethod option with a put value in the weboptions object.
The above pushed me in the right direction (thanks!), and I had success with the following.
CAUTION: The following will overwrite everything in your database!
url = 'https://***.firebaseio.com/.json';
data.users(1) = struct('first','John','last','Locke');
data.users(2) = struct('first','Thomas','last','Hobbes');
data.users(3) = struct('first','Rene','last','Descartes');
headers = {'Content-Type' 'application/json'; 'Accept' 'application/json'};
options = weboptions('RequestMethod', 'put', 'HeaderFields', headers, 'ArrayFormat', 'json');
response = webwrite(url, data, options);
If your data is stored in a .json file (i.e., you don't want to create structures manually in MATLAB), you can read it using fileread and pass the data in as a string (instead of a structure).
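For example (a sketch only; data.json is a placeholder file name, and the URL placeholder is kept from the snippet above):

% Read the raw JSON text from a file and PUT it as-is
url = 'https://***.firebaseio.com/.json';
jsonText = fileread('data.json');
headers = {'Content-Type' 'application/json'; 'Accept' 'application/json'};
options = weboptions('RequestMethod', 'put', 'HeaderFields', headers);
response = webwrite(url, jsonText, options);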

Handling big JSONs in Azure Data Factory

I'm trying to use ADF for the following scenario:
a JSON file is uploaded to an Azure Storage blob, containing an array of similar objects
this JSON is read by ADF with a Lookup Activity and uploaded via a Web Activity to an external sink
I cannot use the Copy Activity, because I need to create a JSON payload for the Web Activity, so I have to look up the array and paste it in like this (the payload of the Web Activity):
{
"some field": "value",
"some more fields": "value",
...
"items": #{activity('GetJsonLookupActivity').output.value}
}
The Lookup Activity has a known limitation: an upper limit of 5000 rows at a time. If the JSON is larger, only the top 5000 rows will be read and everything else will be ignored.
I know this, so I have a system that chops payloads into chunks of 5000 rows before uploading to storage. But I'm not the only user, so there's a valid concern that someone else will try uploading bigger files and the pipeline will silently pass with a partial upload, while the user would obviously expect all rows to be uploaded.
I've come up with two concepts for a workaround, but I don't see how to implement either:
Is there any way for me to check if the JSON file is too large and fail the pipeline if so? The Lookup Activity doesn't seem to allow row counting, and the Get Metadata Activity only returns the size in bytes.
Alternatively, the MSDN docs propose a workaround of copying data in a foreach loop. But I cannot figure out how I'd use Lookup to first get rows 1-5000 and then 5001-10000 and so on from a JSON file. It's easy enough with SQL using OFFSET N FETCH NEXT 5000 ROWS ONLY, but how do you do it with JSON?
You can't set an index range (1-5,000, 5,001-10,000) when you use the Lookup Activity. In my opinion, the workaround mentioned in the doc doesn't mean you can use the Lookup Activity with pagination.
My workaround is to write an Azure Function that gets the total length of the JSON array before the data transfer. Inside the Azure Function, divide the data into separate temporary sub-files with pagination, like sub1.json, sub2.json, and so on, then output an array containing the file names.
Grab that array with a ForEach Activity and execute the Lookup Activity in the loop, with the file path set as a dynamic value; then run the Web Activity.
Surely my idea could be improved. For example, if the total length of the JSON array is under the 5000 limit, you could just return {"NeedIterate": false}. Evaluate that response with an If Condition Activity to decide which path comes next: if the value is false, execute the Lookup Activity directly. Everything can be split across the branches. A sketch of the chunking step follows.
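As an illustration of that chunking step, here is a minimal sketch in TypeScript (the chunkJsonArray helper, the sub*.json names, and the omitted blob read/write calls are all assumptions for the example, not an ADF or Azure Functions API):

// Hypothetical helper for an Azure Function: splits a JSON array into
// chunks under the 5000-row Lookup limit and reports whether the
// pipeline needs to iterate at all.
const CHUNK_SIZE = 5000;

interface ChunkResult {
  NeedIterate: boolean;
  files: string[]; // names of the sub-files written to storage
}

function chunkJsonArray(items: unknown[]): ChunkResult {
  if (items.length <= CHUNK_SIZE) {
    // Under the limit: the pipeline can run the Lookup directly
    return { NeedIterate: false, files: [] };
  }
  const files: string[] = [];
  for (let i = 0; i * CHUNK_SIZE < items.length; i++) {
    const chunk = items.slice(i * CHUNK_SIZE, (i + 1) * CHUNK_SIZE);
    const name = `sub${i + 1}.json`;
    // writeBlobToStorage(name, JSON.stringify(chunk)); // storage call omitted
    files.push(name);
  }
  return { NeedIterate: true, files };
}

The ForEach Activity then iterates over the returned files array, and the If Condition Activity branches on NeedIterate.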

Receive Excel data and turn it into objects to format a JSON

I have a solution that helps me create a wizard to fill in some data and turn it into JSON. The problem now is that I have to receive an xlsx file and turn specific data from it into JSON - not all the data, only the fields I want, which are documented in the last link.
In this link: https://stackblitz.com/edit/xlsx-to-json I can access the Excel data and turn it into an object (when I print document.getElementById('output').innerHTML = JSON.parse(dataString); it shows [object Object]).
I want to implement this solution and automatically get the fields specified in the config.ts, but I can't get it to work. For now, I have this in my HTML and app-component.ts:
https://stackblitz.com/edit/angular-xbsxd9 (It's probably not compiling but it's to show the code only)
It wasn't quite clear what you were asking, but based on the assumption that what you are trying to do is:
Given the data in the spreadsheet that is uploaded
Use a config that holds the list of column names you want returned in the JSON when the user clicks to download
Based on this, I've created a fork of your sample here -> Forked StackBlitz
What I've done is:
use the map operator on the array returned from the sheet_to_json method
within the map, loop through each key of the record (each key being a column in this case)
if a column in the row is defined in the propertymap file (the config), return it
This approach strips out all the columns you don't care about up front, so that by the time the user clicks to download the file, only the columns you want are returned. If you need to keep the original columns, you can move this logic somewhere more convenient for you.
I also augmented the property map a little to give you more granular control over how the data is formatted in the returned JSON, i.e. so numbers aren't treated as strings in the final output. You can use this as a template if it suits your needs for any additional formatting.
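In rough outline, that mapping looks like this (a sketch; the propertyMap entries and column names are illustrative, not the exact config from the fork):

import * as XLSX from 'xlsx';

// Illustrative config: sheet column name -> output key and type
const propertyMap: { [column: string]: { key: string; type: 'string' | 'number' } } = {
  'Product Name': { key: 'name', type: 'string' },
  'Unit Price': { key: 'price', type: 'number' },
};

function sheetToFilteredJson(sheet: XLSX.WorkSheet): object[] {
  const rows = XLSX.utils.sheet_to_json<{ [key: string]: any }>(sheet);
  // Keep only the columns declared in the property map,
  // coercing each value to its configured type.
  return rows.map(row => {
    const out: { [key: string]: any } = {};
    for (const column of Object.keys(row)) {
      const mapping = propertyMap[column];
      if (mapping) {
        out[mapping.key] = mapping.type === 'number' ? Number(row[column]) : String(row[column]);
      }
    }
    return out;
  });
}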
Hope it helps.

Random selection from CSV file in JMeter

I have a very large CSV file (8000+ items) of URLs that I'm reading with a CSV Data Set Config element. It populates the path of an HTTP Request sampler, iterating through it with a While Controller.
This is fine, except that I want each user (thread) to pick a random URL from the CSV URL list. What I don't want is each thread using the CSV items sequentially.
I was able to achieve this with a Random Order Controller and multiple HTTP Request samplers; however, 8000+ HTTP samplers really bogged JMeter down to an unusable state. That is why I put the HTTP sampler URLs in the CSV file. It doesn't appear that I can use the Random Order Controller with the CSV file data, however. So how can I achieve random CSV data item selection per thread?
There is another way to achieve this:
create a separate thread group
depending on what you want to achieve, either:
add a (random) loop count - this will set a start offset for the thread group that does the work, or
add a loop count (or forever) plus a timer, and let it loop while the other thread group is running; this thread group will then read a 'pseudo' random line
It's not really random, the file is still read sequentially, but your work thread makes jumps in the file. It worked for me ;-)
There's no random selection function when reading CSV data. The reason is that you would need to read the whole file into memory first to do this, and that's a bad idea with a load test tool (any load test tool).
Other commercial tools solve this problem by automatically re-processing the data. In JMeter you can achieve the same manually by simply sorting the data on an arbitrary field. If you sort by, say, Surname, the result is an effectively random distribution.
Note: if you keep the default All Threads sharing mode for the CSV Data Set Config, the data will be unique within the scope of the JMeter process.
The new Random CSV Data Set Config from the BlazeMeter plugin should fit your needs perfectly.
As other answers have stated, the reason you're not able to select a line at random is because you would have to read the whole file into memory which is inefficient.
Rather than trying to get JMeter to handle this on the fly, why not just randomise the file order itself before you start the test?
A scripting language such as Perl makes short work of this:
cat unrandom.csv | perl -MList::Util=shuffle -e 'print shuffle<STDIN>' > random.csv
For my case:
single column
small dataset
non-changing CSV
I just stopped using the CSV file and, following https://stackoverflow.com/a/22042337/6463291, used a BeanShell PreProcessor instead, something like this:
// Candidate values that previously lived in the CSV file (placeholders)
String[] query = new String[]{"csv_element1", "csv_element2", "csv_element3"};
// Pick a random element and expose it to the sampler as ${randomOption}
Random random = new Random();
int i = random.nextInt(query.length);
vars.put("randomOption", query[i]);
Performance seems OK, so if you have the same issue you can try this out.
I am not sure if this will work, but I will suggest it anyway.
Why not divide your URLs into 100 different CSV files? Then, in each thread, generate a random number and use it to pick the CSV file to read with the __CSVRead function, as sketched below.
http://jmeter.apache.org/usermanual/functions.html#_CSVRead
The only part I am not sure about is whether the __CSVRead function reopens the file every time or shares the same file handle across threads.
You may want to try it. Please share your findings.
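For example, with files named url1.csv through url100.csv, the HTTP Request path could nest the two functions like this (untested; just a sketch of the idea above):

${__CSVRead(url${__Random(1,100)}.csv,0)}

Here __Random picks the file number and __CSVRead returns column 0 of that file's current row.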
A much more straightforward solution:
In the CSV file, add another column (say B).
Apply the =RAND() function in the first cell of column B (say B1). This creates a random float.
Drag the corner of cell B1 down to apply it to all the corresponding URLs.
Sort by column B.
Your URLs will now be in random order.
Delete column B.

How To Pass Multiple 'Datas' To A View?

I know that you can pass data to a view using:
navigator.pushView(views.LoadoutView, list.selectedItem)
But what if I pass that data, and then want to pass another piece of data to the same view using a similar method?
Can I get/set a new property, or can I write the current data to an XML file as soon as it is received?
Here is a little diagram I made of what I'm trying to achieve (I spent hours on it :P).
From what I've understood, you need to push multiple objects to a view at once, correct?
The data object can only be a single object or object reference, which means that if you want to push more than just your list.selectedItem, create a new Object (a generic one will do) that contains both your properties, and push it, much like the following:
var myDataObject:Object = {firstPieceOfData:list.selectedItem, secondPieceOfData:yourSecondObjectHere};
navigator.pushView(views.LoadoutView, myDataObject);
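In the destination view, that object then arrives in the view's data property, so you can read both pieces back out, e.g.:

// Inside LoadoutView, after the push (the property names match the object above)
var firstItem:Object = data.firstPieceOfData;
var secondItem:Object = data.secondPieceOfData;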