Is it possible to pass a wait time or sleep time through JSON data?
For example, this is my JSON data:
{
  "Departuremonth": "5",
  "Creditcard": "4012000077777777",
  "Firstname": "test",
  "Lastname": "user",
  "Phone": "8111231311"
}
which I will be fetching in my Protractor code. Now in some places in Protractor, I have used sleep to wait for an element:
browser.sleep(3000); //sleep for 3 seconds
So that sleep time is currently hard-coded in Protractor. I want to read the sleep or wait value from the JSON data instead.
Can anyone suggest something on this?
Yes, it is possible to pass any kind of data from a JSON file, and it's quite easy too. Follow the code below:
1-> Create a JSON file, e.g. testData.json, with the following inputs:
testData.json:
{ "shortWait":"5000",
"mediumWait":"12000",
"longWait":"20000"
}
2-> Import the testData.json file in the spec.js/methods.js file where you would like to use it:
var input = require('../testData.json'); // make sure the testData.json file path is correct
3-> Now you can read the respective wait value from the testData.json file using object.name syntax, e.g. input.shortWait, and pass this value to wait/sleep methods as follows:
it('example to pass wait time from json file', function () {
    // lines of code
    utility.wait(input.shortWait); // make sure a wait method is available in the utility file
    browser.sleep(input.longWait);
});
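For reference, here is a minimal sketch of what such a utility wait method could look like; the utility module and its element parameter are hypothetical, not part of the original answer:

// utility.js - hypothetical helper module, a sketch only
var EC = protractor.ExpectedConditions;

module.exports = {
    // Wait until the given element is visible, with a timeout read from JSON data.
    // JSON values arrive as strings (e.g. "5000"), so parse them to a number first.
    wait: function (element, timeout) {
        return browser.wait(EC.visibilityOf(element), parseInt(timeout, 10));
    }
};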
How can I create an object var ofile = new IloOplOutputFile("Resultat.txt"); once and call that object in my post-processing each time the model is solved? My purpose is to create an "ofile" object one time and, every time my model is solved, display the results in a file. I don't want to do this in the main block because I have a lot of parameters.
My model is an iterative one, so it solves for different data, and I am trying to output the results each time.
At the moment it returns only the last iteration, because each time I call the post-processing it creates a new file and crushes the previous results.
Another solution would be to copy the results that CPLEX/OPL displays in its script box directly into my file, but I don't know how to do that with the CPLEX/OPL language.
Regards, thanks!
Do not hesitate to use the append parameter:
IloOplOutputFile(path, append)
Parameters:
path - Optional: The path of the file to open.
append - Optional: If true, sets the stream position at the end of the file.
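As a minimal sketch, using append mode in a post-processing execute block could look like this (the file name comes from the question; the output line is illustrative):

execute {
  // true = append: keep the results of previous solves instead of overwriting the file
  var ofile = new IloOplOutputFile("Resultat.txt", true);
  ofile.writeln("results for this iteration");
  ofile.close();
}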
PS: same question at https://www.ibm.com/developerworks/community/forums/html/topic?id=575928e1-eb6e-4468-9a10-46c6fe8fb73a&ps=25
So I have seen a lot of topics on FFmpeg, and it's a great tool that I learnt about today, but I have spent the day perfecting the command and now I am a little stuck with the Node.js part.
In essence the command does the following: take input from a Mac OS X webcam and stream it to a websocket. Now I looked at a lot of Node.js libraries, but I couldn't find one that did what I need, or I did not understand how to make it do so. Here is an example of the command I am using:
ffmpeg -f avfoundation -framerate 30 -video_size 640x480 -pix_fmt uyvy422 -i "0:1" -f mpegts -codec:v mpeg1video -s 640x480 -b:v 1000k -bf 0 http://localhost:8081/stream
This does everything I need for the streaming side of things, but I wish to call it via Node.js and then be able to monitor the log and parse the data that comes back, for example:
frame= 4852 fps= 30 q=6.8 size= 30506kB time=00:02:41.74 bitrate=1545.1kbits/s speed= 1x \r
and use it to get a JSON array back that I can output to a web page.
Now all I am doing is working on ways of actually parsing the data. I have looked at lots of other answers for things like this, but I can't seem to split/replace/regex it; I can't get anything but one long string out of it.
Here is the code I am using (Node.js):
var ffmpeg = require('child_process').spawn('/usr/local/Cellar/ffmpeg/3.3.1/bin/ffmpeg', ['-f', 'avfoundation', '-framerate', '30', '-video_size', '640x480', '-pix_fmt', 'uyvy422', '-i', '0:1', '-f', 'mpegts', '-codec:v', 'mpeg1video', '-s', '640x480', '-b:v', '1000k', '-bf', '0', 'http://localhost:8081/test']);

ffmpeg.on('error', function (err) {
    console.log(err);
});

ffmpeg.on('close', function (code) {
    console.log('ffmpeg exited with code ' + code);
});

ffmpeg.stderr.on('data', function (data) {
    // console.log('stderr: ' + data);
    var tData = data.toString('utf8');
    // var a = tData.split('[\\s\\xA0]+');
    var a = tData.split('\n');
    console.log(a);
});

ffmpeg.stdout.on('data', function (data) {
    var frame = Buffer.from(data).toString('base64');
    // console.log(frame);
});
I have tried splitting with new lines, carriage returns, spaces, and tabs, but I just can't seem to get a basic array of pieces that I can work with.
Another thing to note: you will notice the log comes back via stderr. I have seen this online, and apparently it happens for a lot of people, so I am not sure what the deal is with that, but that is why the code is in the stderr callback.
Any help is very much appreciated, as I am truly confused about what I am doing wrong.
Thanks.
An update on this: I worked with one of the guys from the IRC channel #ffmpeg on freenode. The answer was to send the output via a pipe to stdout.
For example I appended the following to the FFMpeg command:
-progress pipe:1
The progress flag is used to give an output every second with information about the stream, so this is pretty much everything you get from the stderr stream every second, but piped to the stdout stream in a format that I can parse. Below is taken from the documentation:
-progress url (global)
Send program-friendly progress information to url. Progress information is written approximately every second and at the end of the encoding process. It is made of "key=value" lines. key consists of only alphanumeric characters. The last key of a sequence of progress information is always "progress".
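For illustration, each per-second block on stdout looks roughly like this (the values below are made up):

frame=4852
fps=30.2
bitrate=1545.1kbits/s
out_time=00:02:41.740000
speed=1x
progress=continue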
Here is an example of the code I used to parse the stream information:
ffmpeg.stdout.on('data', function (data) {
    var tLines = data.toString().split('\n');
    var progress = {};
    for (var i = 0; i < tLines.length; i++) {
        var item = tLines[i].split('=');
        if (typeof item[0] != 'undefined' && typeof item[1] != 'undefined') {
            progress[item[0]] = item[1];
        }
    }
    // The 'progress' variable contains a key/value object of the data
    console.log(progress);
});
Thanks to all that commented!
In the spirit of not reinventing the wheel, you might want to try using fluent-ffmpeg. It dispatches a progress event with a number of useful fields:
'progress': transcoding progress information
The progress event is emitted every time ffmpeg reports progress
information. It is emitted with an object argument with the following
keys:
frames: total processed frame count
currentFps: framerate at which FFmpeg is currently processing
currentKbps: throughput at which FFmpeg is currently processing
targetSize: current size of the target file in kilobytes
timemark: the timestamp of the current frame in seconds
percent: an estimation of the progress percentage
If you're curious about how they do this, you can read the source, starting from here and here
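As a minimal sketch, hooking into that event could look like this (the input and output file names are placeholders):

var ffmpeg = require('fluent-ffmpeg');

ffmpeg('input.mp4')
    .output('output.mp4')
    .on('progress', function (progress) {
        // Fired each time ffmpeg reports progress
        console.log('frames: ' + progress.frames + ', about ' + progress.percent + '% done');
    })
    .run();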
FFmpeg uses stderr for log output because stdout is used for piping the output to other processes. The stuff on stderr is actually just debug information, not the actual output of the process.
BONUS ROUND
I've seen some hacky video players that use websockets to stream videos, but that approach has a number of issues. I'm not going to go over them here, but I will explain why I think you should use hls.js.
Support is pretty good; it basically works everywhere except old IE. It uses MSE to upgrade the standard video element, so you don't have to wrestle with building a custom player.
Here are the docs for the hls format flag
Here's some code that I'm using to stream from an IPTV box to a web page.
this.ffmpeg = new FFmpeg()
this.ffmpeg.input(request(this.http_stream))
    .videoCodec('copy')
    .audioCodec('copy')
    .outputOptions([
        '-f hls',
        '-hls_list_size 6',
        '-hls_flags delete_segments'
    ])
    .output(path.join(this.out_dir, 'video.m3u8'))
    .run()
It generates an .m3u8 manifest file along with segmented MPEG-TS video files. All you need to do after that is load the m3u8 file into the hls.js player and you have a live stream!
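On the page side, loading that manifest with hls.js could look roughly like this (the element id and manifest URL are placeholders):

var video = document.getElementById('video');
if (Hls.isSupported()) {
    var hls = new Hls();
    hls.loadSource('/video.m3u8'); // the manifest generated above
    hls.attachMedia(video);
}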
If you're going to re-encode the stream, you will probably see some low fps and glitchiness. I'm lucky since my source stream is already encoded as mpeg-ts.
I've been trying for hours to debug a Firebase rules problem and was wondering if there is something easier available.
My problem is that I save my firebaseObject with $save (or create it with $add) and get a permission denied because of my rules. However, both the rules and the object are pretty complex, and there are dozens of rules involved. In the simulator I think I got it all right, but I still get permission denied.
The problem is that I am not 100% sure how the JSON data that $save tries to send to Firebase actually looks. If I use a normal console.log(myObject), I of course get a list of all values and functions inside this object, but this isn't the same as the raw JSON I would expect (like { "name": "value" }).
Is there any way to display the actual plain JSON data $save sends, so I can copy it into the rules simulator and debug? Or is there any other way to see which exact permission is denied?
Otherwise, I have to go one by one, switching my permissions off and on, which would be a pretty long night for me. :(
If the value of the $firebaseObject is an object, the only difference (in addition to the prototype-wired methods) should be a number of $-prefixed properties (like $id and $resolved). So you should be able to see the actual JSON of what will be written to the database using something like this:
var written = {};
Object.keys(myObject).forEach(function (key) {
    if (key.charAt(0) !== "$") { written[key] = myObject[key]; }
});
console.log(JSON.stringify(written));
The $$hashKey entries mentioned in your comment are added by AngularJS. A more general mechanism could be used to remove/ignore all $-prefixed keys throughout the object:
console.log(JSON.stringify(myObject, function (key, val) {
    return key.charAt(0) === "$" ? undefined : val;
}));
Please help me with the following issue:
I have a simple JMeter test where variables are stored in a CSV file. There is only one request in the test:
GET .../api/${page}, where ${page} is a variable from the CSV
Everything goes well with thread properties of, for example, 10 threads x 30 loop count.
If I increase either parameter, for example to 10x40 or 15x30, I receive at least one error, and it looks like a JMeter issue:
one (random) request isn't able to take its variable from the CSV, and I get an error:
- .../api/page returns a 404 error
So the question is: is there any limit on JMeter's connection to the CSV file?
Thanks in advance.
A key point to focus on is the way your application manages the case when two different users request the same page.
There are a few checks that I would recommend:
be sure that the "Recycle on EOF" property is set to true
be sure that you have more lines in the CSV than the number of threads you are firing
use a "View Results Tree" listener to investigate the kind of errors you are getting
Let us know.
I am new to CouchDB. I need to get 60 or more JSON files per minute from a server.
I have to upload these JSON files to CouchDB individually as soon as I receive them.
I installed CouchDB on my Linux machine.
I hope someone can help me with my requirement.
If possible, can someone help me with pseudo-code?
My idea:
Write a Python script for uploading all the JSON files to CouchDB. Each JSON file must become its own document, and the data present in the JSON must be inserted into CouchDB unchanged (the specified format with values in a file).
Note:
These JSON files are transactional; one file is generated every second, so I need to read each file, upload it into CouchDB in the same format, and on successful upload archive the file into a different folder on the local system.
Python program to parse the JSON and insert it into CouchDB:

import glob
import errno
import json
import couchdb
from pprint import pprint

couch = couchdb.Server()  # Assuming localhost:5984
#couch.resource.credentials = (USERNAME, PASSWORD)
# If your CouchDB server is running elsewhere, set it up like this:
couch = couchdb.Server('http://localhost:5984/')
db = couch['mydb']

path = 'C:/Users/Desktop/CouchDB_Python/Json_files/*.json'
files = glob.glob(path)
for name in files:  # 'file' is a builtin type, 'name' is a less-ambiguous variable name.
    try:
        with open(name) as f:  # No need to specify 'r': this is the default.
            data = json.load(f)
            db.save(data)
            pprint(data)
    except IOError as exc:
        if exc.errno != errno.EISDIR:  # Do not fail if a directory is found, just ignore it.
            raise  # Propagate other kinds of IOError.
I would use the CouchDB bulk API, even though you have specified that you need to send the files to the db one by one. For example, implementing a simple queue that gets sent out every, say, 5-10 seconds via a bulk docs call will greatly increase the performance of your application.
There is obviously a quirk in that: you need to know the IDs of the docs that you want to get from the DB. But for PUTs it is perfect. (That is not entirely true: you can get ranges of docs using a bulk operation if the IDs you are using for your docs can be sorted nicely.)
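With the couchdb module used above, a bulk-insert queue sketch might look like this (the queue size and names are illustrative, not from the original answer):

import couchdb

couch = couchdb.Server('http://localhost:5984/')
db = couch['mydb']

queue = []

def enqueue(doc, flush_at=50):
    # Collect docs and send them in one bulk request once the queue is full.
    queue.append(doc)
    if len(queue) >= flush_at:
        flush()

def flush():
    # db.update() performs a single _bulk_docs round-trip for the whole batch.
    if queue:
        db.update(queue)
        del queue[:]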
From my experience working with CouchDB, I have a hunch that you are dealing with transactional documents in order to compile them into some sort of sum result and act on that data accordingly (maybe creating the next transactional doc in the series). For that you can rely on CouchDB by using 'reduce' functions on the views you create. It takes a little practice to get a reduce function working properly, and it is highly dependent on what it is you actually want to achieve and what data you are prepared to emit from the view, so I can't really provide you with more detail on that.
So in the end the app logic would go something like this:
get _design/someDesign/_view/yourReducedView
calculate new transaction
add transaction to queue
onTimeout
    send everything in the transaction queue
If I got that first part (about why you are using transactional docs) wrong, all that would really change in my app logic is the part where you get those transactional docs.
Also, before writing your own 'reduce' function, have a look at the built-in ones (they are a lot faster than anything outside the db engine can do):
http://wiki.apache.org/couchdb/HTTP_Bulk_Document_API
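Regarding the built-in reduce functions mentioned above: for example, a view that totals transaction values could pair a simple map function with the built-in _sum reduce (the doc.amount field here is hypothetical):

// map function
function (doc) {
    if (doc.amount) {
        emit(doc._id, doc.amount);
    }
}

// reduce: just name the built-in
_sum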
EDIT:
Since you are just starting out, I strongly recommend having a look at the CouchDB Definitive Guide.
NOTE FOR LATER:
Here is one hidden stone (well, maybe not so much a hidden stone as a thing that is not obvious for a newcomer to look out for). When you write a reduce function, make sure that it does not produce too much output for queries without boundaries. This will extremely slow down the entire view, even when you pass reduce=false when getting stuff from it.
So you need to get JSON documents from a server and send them to CouchDB as you receive them. A Python script would work fine. Here is some pseudo-code:
loop (until no more docs)
    get new JSON doc from server
    send JSON doc to CouchDB
end loop
In Python, you could use requests to send the documents to CouchDB, and probably to get the documents from the server as well (if it exposes an HTTP API).
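A minimal sketch of the upload step with requests, assuming a local database named mydb (both the URL and the database name are placeholders):

import requests

COUCH_URL = 'http://localhost:5984/mydb'  # assumed database URL

def upload(doc):
    # POST lets CouchDB assign the document ID; use PUT /mydb/<id> to choose your own.
    resp = requests.post(COUCH_URL, json=doc)
    resp.raise_for_status()
    return resp.json()  # e.g. {'ok': True, 'id': '...', 'rev': '...'}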
You might want to check out the pycouchdb module for Python 3. I've used it myself to upload lots of JSON objects into a CouchDB instance. My project does pretty much the same as you describe, so you can take a look at my project Pyro on GitHub for details.
My class looks like this:
import sys
import pycouchdb

class MyCouch:
    """ COMMUNICATES WITH COUCHDB SERVER """

    def __init__(self, server, port, user, password, database):
        # ESTABLISHING CONNECTION
        self.server = pycouchdb.Server("http://" + user + ":" + password + "@" + server + ":" + port + "/")
        self.db = self.server.database(database)

    def check_doc_rev(self, doc_id):
        # CHECKS REVISION OF SUPPLIED DOCUMENT
        try:
            rev = self.db.get(doc_id)
            return rev["_rev"]
        except Exception:
            return -1

    def update(self, all_computers):
        # UPDATES DATABASE WITH JSON STRING
        try:
            result = self.db.save_bulk(all_computers, transaction=False)
            sys.stdout.write(" Updating database")
            sys.stdout.flush()
            return result
        except Exception as ex:
            sys.stdout.write("Updating database")
            sys.stdout.write("Exception: ")
            print(ex)
            sys.stdout.flush()
            return None
Let me know in case of any questions - I will be more than glad to help if you find some of my code usable.