UVM - using my own configuration files vs. using the config db

I wrote a sequence that can be generic across a variety of tests. I want to do this by adding a configuration file for each test.
The code for the sequence:
//----------------------------------------------------------------------
//Sequence
//----------------------------------------------------------------------
class axi_sequence extends uvm_sequence#(axi_transaction);
`uvm_object_utils(axi_sequence)
//new
function new (string name = "axi_sequence");
super.new(name);
endfunction: new
//main task
task body();
int file_p, temp, len;
byte mode;
bit [31:0] addr;
string str;
axi_transaction axi_trx;
bit [31:0] transfers [$];
bit [31:0] data;
//open file
file_p = $fopen("./sv/write_only.txt", "r"); //the name of the file should be the same as the name of the test
//in case the file doesn't exist
`my_fatal(file_p != 0, "FILE OPEN FAILED")
//read file
while ($feof(file_p) == 0)
begin
temp = $fgets(str, file_p);
axi_trx = axi_transaction::type_id::create(.name("axi_trx"), .contxt(get_full_name()));
// start_item and finish_item together initiate the operation of
// a sequence item.
start_item(axi_trx);
transfers = {};
$sscanf(str, "%c %d %h", mode, len, addr);
//assign the data to str
str = str.substr(12,str.len()-1);
//create and assign to transfers queue
if(mode == "w")
begin
for (int i = 0; i <= len; i++) begin
temp = $sscanf(str, "%h", data);
`my_fatal(temp > 0, "THE LENGTH PARAM IS WRONG - too big")
transfers.push_back(data);
str = str.substr(13+(i+1)*8,str.len()-1);
end//end for
`my_fatal($sscanf(str, "%h", temp) <= 0, "THE LENGTH PARAM IS WRONG - too small")
end//if
axi_trx.init(mode,len,addr,transfers);
if (to_random == 1) //to_random should be a part of the configuration file
axi_trx.my_random();
else
axi_trx.delay = const_delay; //const_delay should be a part of the configuration file
//finish_item contains the send_request call, which sends the request item
//to the sequencer, which will forward it to the driver.
finish_item(axi_trx);
end//begin
endtask: body
endclass: axi_sequence
Should I do it with a different configuration file per test, or can I do it with values passed from the test down to the agent through the config db?
And how can I pass a different path (for the file_p = $fopen() call) to each test?

You shouldn't need a separate configuration file for each test. Ideally, you would just pass the configuration from the test level down into the env through the config_db (or through a separate configuration object for your agent).
When you create your sequence in your test (or virtual sequencer), you should be able to set your variables as needed.
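For example, here is a minimal sketch of that approach. The configuration class and its fields (file_path, to_random, const_delay) are hypothetical names for the knobs used in the sequence above, not part of any library:
//----------------------------------------------------------------------
//Hypothetical configuration object holding the sequence's per-test knobs
//----------------------------------------------------------------------
class axi_seq_config extends uvm_object;
`uvm_object_utils(axi_seq_config)
string file_path = "./sv/write_only.txt"; //stimulus file for this test
bit to_random = 0; //randomize delays?
int const_delay = 0; //fixed delay used when to_random == 0
function new (string name = "axi_seq_config");
super.new(name);
endfunction: new
endclass: axi_seq_config

//In the test's build_phase: create the object and publish it.
axi_seq_config cfg = axi_seq_config::type_id::create("cfg");
cfg.file_path = "./sv/write_only.txt"; //each test sets its own file here
uvm_config_db#(axi_seq_config)::set(this, "*", "axi_seq_config", cfg);

//In axi_sequence::body(), before the $fopen call: retrieve the object.
//Sequences are not components, so pass null plus the sequence's full name.
axi_seq_config cfg;
if (!uvm_config_db#(axi_seq_config)::get(null, get_full_name(), "axi_seq_config", cfg))
`uvm_fatal(get_type_name(), "no axi_seq_config found in the config_db")
file_p = $fopen(cfg.file_path, "r");
The sequence then reads file_path, to_random and const_delay from cfg instead of hard-coding them, so tests differ only in what they put into the config_db.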


Forge chunk upload .NET Core

I have a question about uploading large objects to a Forge bucket. I know that I need to use the /resumable API, but how can I get the file when I have only the filename? In this code, what exactly is FILE_PATH? Generally, should I save the file on the server first and then upload it to the bucket?
private static dynamic resumableUploadFile()
{
Console.WriteLine("*****begin uploading large file");
string path = FILE_PATH;
if (!File.Exists(path))
path = @"..\..\..\" + FILE_PATH;
//total size of file
long fileSize = new System.IO.FileInfo(path).Length;
//size of piece, say 2M
long chunkSize = 2 * 1024 * 1024 ;
//pieces count
long nbChunks = (long)Math.Round(0.5 + (double)fileSize / (double)chunkSize);
//record a global response for next function.
ApiResponse<dynamic> finalRes = null ;
using (FileStream streamReader = new FileStream(path, FileMode.Open))
{
//unique id of this session
string sessionId = RandomString(12);
for (int i = 0; i < nbChunks; i++)
{
//start binary position of one certain piece
long start = i * chunkSize;
//end binary position of one certain piece
//if the size of last piece is bigger than total size of the file, end binary
// position will be the end binary position of the file
long end = Math.Min(fileSize, (i + 1) * chunkSize) - 1;
//tell Forge about the info of this piece
string range = "bytes " + start + "-" + end + "/" + fileSize;
// length of this piece
long length = end - start + 1;
//read the file stream of this piece
byte[] buffer = new byte[length];
MemoryStream memoryStream = new MemoryStream(buffer);
int nb = streamReader.Read(buffer, 0, (int)length);
memoryStream.Write(buffer, 0, nb);
memoryStream.Position = 0;
//upload the piece to Forge bucket
ApiResponse<dynamic> response = objectsApi.UploadChunkWithHttpInfo(BUCKET_KEY,
FILE_NAME, (int)length, range, sessionId, memoryStream,
"application/octet-stream");
finalRes = response;
if (response.StatusCode == 202){
Console.WriteLine("one certain piece has been uploaded");
continue;
}
else if(response.StatusCode == 200){
Console.WriteLine("the last piece has been uploaded");
}
else{
//any error
Console.WriteLine(response.StatusCode);
break;
}
}
}
return (finalRes);
}
FILE_PATH is the path where you stored the file on your server.
You should upload your file to your server first. Why? Because when you upload a file to the Autodesk Forge server you need an internal token, which should be kept secret (that is why you keep it on your server); you don't want someone to take that token and mess up your Forge account.
The code you pasted from this article is more about uploading from a server where the file is already stored - either for caching purposes or because the server is using/modifying those files.
As Paxton.Huynh said, FILE_PATH there contains the location on the server where the file is stored.
If you just want to upload the chunks to Forge through your server (to keep credentials and the internal access token secret), like a proxy, then it's probably better to pass those chunks straight on to Forge instead of storing the file on the server first and then passing it on, which is what the sample code you referred to does.
See e.g. this, though it's in NodeJS: https://github.com/Autodesk-Forge/forge-buckets-tools/blob/master/server/data.management.js#L171
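For illustration, here is a rough C# sketch of that proxy approach inside an ASP.NET Core controller. It reuses objectsApi, BUCKET_KEY and FILE_NAME from the snippet above; the route, the action name and the use of the standard Content-Range header are my assumptions:
[HttpPut("upload/{sessionId}")]
public async Task<IActionResult> ForwardChunk(string sessionId)
{
    //buffer only the current chunk in memory instead of the whole file
    using (var chunk = new MemoryStream())
    {
        await Request.Body.CopyToAsync(chunk);
        chunk.Position = 0;
        //the client reports this chunk's byte range within the whole file,
        //e.g. "bytes 0-2097151/10485760"
        string range = Request.Headers["Content-Range"];
        //same Forge call as in the snippet above, minus the FileStream
        ApiResponse<dynamic> response = objectsApi.UploadChunkWithHttpInfo(BUCKET_KEY,
            FILE_NAME, (int)chunk.Length, range, sessionId, chunk,
            "application/octet-stream");
        //202 = intermediate chunk accepted, 200 = last chunk, upload complete
        return StatusCode(response.StatusCode, response.Data);
    }
}
This way each chunk is held only briefly in memory on the server and nothing is ever written to its disk.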

Retrieve Active Directory data from within Azure SSIS IR in Data Factory

I have the script task within SSIS as outlined below. It works on premises, but once deployed to Azure SSIS IR it simply fails with no error messages.
How can I configure Azure SSIS IR differently or how can I change the script below to work within the Azure SSIS IR?
public override void CreateNewOutputRows()
{
// Specify the connection string of your domain
// mycompany.com => LDAP://DC=mycompany,DC=com
// Consider using a variable or parameter instead
// of this hardcoded value. On the other hand
// how many times does your domain changes
string domainConnectionString = "LDAP://[complete LDAP connection].com/OU=Staff,DC=PMLLP,DC=com";
using (DirectorySearcher ds = new DirectorySearcher(new DirectoryEntry(domainConnectionString)))
{
ds.Filter = "(&" +
"(objectClass=user)" + // Only users and not groups
")";
// See ds. for more options like PageSize.
//ds.PageSize = 1000;
// Find all persons matching your filter
using (SearchResultCollection results = ds.FindAll())
{
// Loop through all rows of the search results
foreach (SearchResult result in results)
{
// Add a new row to the buffer
Output0Buffer.AddRow();
// Fill all columns with the value from the Active Directory
Output0Buffer.employeeID = GetPropertyValue(result, "employeeID");
Output0Buffer.mail = GetPropertyValue(result, "mail");
Output0Buffer.SamAccountName = GetPropertyValue(result, "SamAccountName");
Output0Buffer.UserPrincipalName = GetPropertyValue(result, "UserPrincipalName");
}
}
}
}

ffmpeg azure function consumption plan low CPU availability for high volume requests

I am running an Azure queue function on a consumption plan; my function starts an FFmpeg process and accordingly is very CPU intensive. When I run the function with fewer than 100 items in the queue at once it works perfectly: Azure scales up, gives me plenty of servers, and all of the tasks complete very quickly. My problem is that once I start doing more than 300 or 400 items at once, it starts fine, but after a while the CPU slowly drops from 80% utilisation to only around 10%. My functions can't finish in time with only 10% CPU.
Does anyone know why the CPU usage goes lower the more instances my function creates? Thanks in advance, Cuan
edit: the function is set to run only one process at a time per instance, but the problem persists when set to 2 or 3 concurrent processes per instance in host.json
edit: the CPU drops become noticeable at 15-20 servers and start causing failures at around 60. After that the CPU bottoms out at an average of 8-10%, with individual instances reaching 0-3%, and the server count seems to increase without limit (which would be more helpful if I got some CPU with the servers).
Thanks again, Cuan.
I've also added the function code to the bottom of this post in case it helps.
using System.Net;
using System;
using System.Diagnostics;
using System.ComponentModel;
public static void Run(string myQueueItem, TraceWriter log)
{
log.Info($"C# Queue trigger function processed a request: {myQueueItem}");
//Basic Parameters
string ffmpegFile = @"D:\home\site\wwwroot\CommonResources\ffmpeg.exe";
string outputpath = @"D:\home\site\wwwroot\queue-ffmpeg-test\output\";
string reloutputpath = "output/";
string relinputpath = "input/";
string outputfile = "video2.mp4";
string dir = @"D:\home\site\wwwroot\queue-ffmpeg-test\";
//Special Parameters
string videoFile = "1 minute basic.mp4";
string sub = "1 minute sub.ass";
//guid tmp files
// Guid g1=Guid.NewGuid();
// Guid g2=Guid.NewGuid();
// string f1 = g1 + ".mp4";
// string f2 = g2 + ".ass";
string f1 = videoFile;
string f2 = sub;
//guid output - we will now do this at the caller level
string g3 = myQueueItem;
string outputGuid = g3+".mp4";
//get input files
//argument
string tmp = subArg(f1, f2, outputGuid );
//String.Format("-i \"" + #"input/tmp.mp4" + "\" -vf \"ass = '" + sub + "'\" \"" + reloutputpath +outputfile + "\" -y");
log.Info("ffmpeg argument is: "+tmp);
//startprocess parameters
Process process = new Process();
process.StartInfo.FileName = ffmpegFile;
process.StartInfo.Arguments = tmp;
process.StartInfo.UseShellExecute = false;
process.StartInfo.RedirectStandardOutput = true;
process.StartInfo.RedirectStandardError = true;
process.StartInfo.WorkingDirectory = dir;
//output handler
process.OutputDataReceived += new DataReceivedEventHandler(
(s, e) =>
{
log.Info("O: "+e.Data);
}
);
process.ErrorDataReceived += new DataReceivedEventHandler(
(s, e) =>
{
log.Info("E: "+e.Data);
}
);
//start process
process.Start();
log.Info("process started");
process.BeginOutputReadLine();
process.BeginErrorReadLine();
process.WaitForExit();
}
public static void getFile(string link, string fileName, string dir, string relInputPath){
using (var client = new WebClient()){
client.DownloadFile(link, dir + relInputPath+ fileName);
}
}
public static string subArg(string input1, string input2, string output1){
return String.Format("-i \"" + @"input/" + input1 + "\" -vf \"ass = '" + @"input/" + input2 + "'\" \"" + @"output/" + output1 + "\" -y");
}
When you use the D:\home directory you are writing to the network share that every instance of the function app mounts, which means the instances continually try to write to the same spot while the functions run, causing a massive I/O block. Writing to D:\local instead, and then sending the finished file somewhere else, solves that issue: rather than each instance constantly writing to a shared location, they write locally and only ship the file once it is complete, to a location designed to handle high throughput.
The easiest way I found to manage the input and output after writing to D:\local was to hook the function up to an Azure storage container and handle the ins and outs that way. Doing so kept the average CPU at 90-100% for upwards of 70 concurrent instances.
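As an illustration, here is a rough sketch of that pattern using the classic WindowsAzure.Storage SDK; the container name "ffmpeg-output" and the reuse of the AzureWebJobsStorage connection string are my assumptions, not something from the original function:
using System;
using System.IO;
using Microsoft.WindowsAzure.Storage;

public static void UploadResult(string outputGuid)
{
    //D:\local is private to this instance, so instances no longer contend
    //for the same network share while ffmpeg is writing
    string localPath = Path.Combine(@"D:\local\Temp", outputGuid);

    var account = CloudStorageAccount.Parse(
        Environment.GetEnvironmentVariable("AzureWebJobsStorage"));
    var container = account.CreateCloudBlobClient()
        .GetContainerReference("ffmpeg-output");
    container.CreateIfNotExists();

    //one write per finished item, to storage built for high throughput
    container.GetBlockBlobReference(outputGuid).UploadFromFile(localPath);
    File.Delete(localPath); //free the instance-local disk
}
In the question's Run function this amounts to pointing outputpath (and dir) at D:\local instead of D:\home\site\wwwroot\..., then calling something like the above after process.WaitForExit().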

VC++ Store array<unsigned char>^ in MySql database

I am working on a VC++ CLI application and I need to store an image in a MySQL database. I realize that this is usually not good practice, but due to limited access to the file system this is the way I have to go.
I have been able to get the image into a managed unsigned char array, which looks like what I have to do from the examples I have found online (which are mostly in C# and use commands not available to me). I just need to figure out how to get this into the database; I am at a loss, and my searches have not turned up anything useful for VC++ at least.
Here is what I need to do, I have a managed array:
array<unsigned char>^ imageSource;
that contains the bytes of the image by using:
System::IO::FileStream^ fs;
System::IO::BinaryReader^ br;
//Read the Image
fs = gcnew System::IO::FileStream(filepath, System::IO::FileMode::Open, System::IO::FileAccess::Read);
br = gcnew System::IO::BinaryReader(fs);
//Store the Image into an array
imageSource = br->ReadBytes((int)fs->Length);
Next I need to save it into my Database:
sql::Connection *sqlConn;
sql::PreparedStatement *sqlStmt;
sql::ResultSet *sqlResult;
// Sql Connection Code
//I have a Routine to save images
//and have Select because get_last_insert_id() is not available in the c++ connector
sqlStr = "SELECT save_item_image (?, ?, ?, ?, ?) AS ID";
//Prepare the Query
sqlStmt = this->sqlConn->prepareStatement(sqlStr);
//Set Parameters in SqlStatment
sqlStmt->setInt(1, 1);
sqlStmt->setInt(2, 1);
sqlStmt->setBlob(3, &imgSource); <-- This is where I need to insert image
sqlStmt->setString(4, "jpg");
sqlStmt->setString(5, "test.jpg");
sqlStmt->executeUpdate();
As I understand it the setBlob function is requiring a std::istream to read the data. I am not sure how to go about this, this is the main hangup I have.
I finally figured this out and wanted to post an answer in case anyone else is looking for an answer to this.
The easiest way to accomplish this is to use pin_ptr to pin the first element of the managed array, which effectively gives a pointer to the entire array. Next we convert that to a native unsigned char* and then reinterpret_cast it to a char*. Finally, to use this with setBlob, we create a memory buffer and wrap it in a std::istream.
Here is a working example:
Read the Image file to a managed array:
array<unsigned char>^ rawImageData;
System::IO::FileStream^ fs;
System::IO::BinaryReader^ br;
//Setup the filestream and binary reader
fs = gcnew System::IO::FileStream(filepath, System::IO::FileMode::Open, System::IO::FileAccess::Read);
br = gcnew System::IO::BinaryReader(fs);
//Store the Image into a byte array
rawImageData = br->ReadBytes((int)fs->Length);
Memory Buffer
#include <iostream>
#include <istream>
#include <streambuf>
#include <string>
struct membuf : std::streambuf {
membuf(char* begin, char* end) {
this->setg(begin, begin, end);
}
};
Finally the MySQL Code:
sql::Driver *sqlDriver;
sql::Connection *sqlConn;
sql::PreparedStatement *sqlStmt;
sql::ResultSet *sqlResult;
/*** SQL Connection Code Here ***/
//Build the Item Sql String to Save Images
sqlStr = "SELECT save_item_image (?, ?, ?, ?, ?) AS ID";
//Prepare the Query
sqlStmt = sqlConn->prepareStatement(sqlStr);
//Create a pin_ptr to the first element in the managed array
pin_ptr<unsigned char> p = &rawImageData[0];
//Get a char pointer to use in the memory buffer
unsigned char* pby = p;
char* pch = reinterpret_cast<char*>(pby);
//Memory buffer; note the use of Length from rawImageData: image data contains
//NUL bytes, so C-string-style length detection would not return the right length.
membuf sbuf(pch, pch + rawImageData->Length);
//Create the istream to use in the setBlob
std::istream sb(&sbuf, std::ios::binary | std::ios::out);
//Finally save everything into the database
sqlStmt->setInt(1, 1);
sqlStmt->setInt(2, 1);
sqlStmt->setBlob(3, &sb); //*** Insert the Image ***/
sqlStmt->setString(4, "Name");
sqlStmt->setString(5, "Path");
sqlStmt->executeUpdate();
sqlResult = sqlStmt->getResultSet();
//Make sure everything was executed ok
if (sqlResult->rowsCount() > 0) {
sqlResult->next();
int image_id = sqlResult->getInt("ID");
}
Note: I used a SELECT in the SQL string because save_item_image is actually a stored function that returns the ID of the inserted image. I needed this approach because there is no other way (that I could find) to get the last inserted ID in the C++ connector. Other MySQL libraries have a last_insert_id() command, but the connector does not.

F#: DataContractJsonSerializer.WriteObject method

I am new to programming and F# is my first language.
Here are the relevant parts of my code:
let internal saveJsonToFile<'t> (someObject:'t) (filePath: string) =
use fileStream = new FileStream(filePath, FileMode.OpenOrCreate)
(new DataContractJsonSerializer(typeof<'t>)).WriteObject(fileStream, someObject)
let dummyFighter1 = { id = 1; name = "Dummy1"; location = "Here"; nationality = "Somalia"; heightInMeters = 2.0; weightInKgs = 220.0; weightClass = "Too fat"}
let dummyFighter2 = { id = 2; name = "Dummy2"; location = "There"; nationality = "Afghanistan"; heightInMeters = 1.8; weightInKgs = 80.0; weightClass = "Just Nice"}
let filePath = @"G:\User\Fighters.json"
saveJsonToFile dummyFighter1 filePath
saveJsonToFile dummyFighter2 filePath
When I run "saveJsonToFile dummyFighter1 filePath", the information is successfully saved. My problem is this: Once I run "saveJsonToFile dummyFighter2 filePath", it immediately replaces all the contents that are already in the file, i.e., all the information about dummyFighter1.
What changes should I make so that information about dummyFighter2 is appended to the file, instead of replacing information about dummyFighter1?
Change the way you open the file by replacing FileMode.OpenOrCreate with FileMode.Append. Append means "create or append":
use fileStream = new FileStream(filePath, FileMode.Append)
From MSDN (https://msdn.microsoft.com/fr-fr/library/system.io.filemode%28v=vs.110%29.aspx):
FileMode.Append opens the file if it exists and seeks to the end of the file, or creates a new file. This requires FileIOPermissionAccess.Append permission. FileMode.Append can be used only in conjunction with FileAccess.Write. Trying to seek to a position before the end of the file throws an IOException exception, and any attempt to read fails and throws a NotSupportedException exception.
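Putting that together with the original helper, a minimal sketch of the adjusted function (same signature as in the question) would be:
let internal saveJsonToFile<'t> (someObject: 't) (filePath: string) =
    // FileMode.Append creates the file if needed, otherwise seeks to its end,
    // so each call adds the new object after the existing content
    use fileStream = new FileStream(filePath, FileMode.Append)
    (new DataContractJsonSerializer(typeof<'t>)).WriteObject(fileStream, someObject)
One caveat: appending a second object this way produces two JSON documents back to back in one file, which most JSON parsers will not accept as a single document. If you need the file to remain valid JSON, read it back, add the new record to a collection, and rewrite the whole file.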