Load public key from unsigned char array

I have a public key as an array of bytes from xxd:
unsigned char publicKey_txt[] = {
0x30, 0x82, 0x02, 0x22, 0x30, 0x0d, 0x06, 0x09, 0x2a, 0x86, 0x48, 0x86, .. };
From previous Stack Overflow questions I have a general understanding that in my case a StringSource followed by Load should work:
StringSource publicstring(publicKey_txt, true, NULL);
publicKey.Load(publicstring);
Loading from a text file simply works, but when I load from the StringSource I get an error:
Error: BER decode error
How do I load a public key from unsigned char array?

Found my answer in @jww's answer to "Load RSA PKCS#1 private key from memory?".
In my case there is a slight modification: instead of StringSource I use ArraySource, where publicKey_txt_len is the size of the char array publicKey_txt.
CryptoPP::ArraySource as( publicKey_txt, publicKey_txt_len, true);
publicKey.Load(as);
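For completeness, a minimal self-contained sketch of that fix, assuming the bytes are a DER-encoded SubjectPublicKeyInfo (as the xxd dump suggests) and that the array is defined in the same translation unit, so sizeof can supply the length instead of a separate publicKey_txt_len variable:
#include <cryptopp/rsa.h>
#include <cryptopp/filters.h>

// publicKey_txt is the xxd-generated array from the question
CryptoPP::RSA::PublicKey publicKey;
CryptoPP::ArraySource as(publicKey_txt, sizeof(publicKey_txt), true /*pumpAll*/);
publicKey.Load(as);  // BER-decodes the SubjectPublicKeyInfo from the array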

What should I set the flags field of CUDA_BATCH_MEM_OP_NODE_PARAMS to?

The CUDA graph API exposes a function call for adding a "batch memory operations" node to a graph:
CUresult cuGraphAddBatchMemOpNode (
    CUgraphNode* phGraphNode,
    CUgraph hGraph,
    const CUgraphNode* dependencies,
    size_t numDependencies,
    const CUDA_BATCH_MEM_OP_NODE_PARAMS* nodeParams
);
but the documentation for this API call does not explain what the flags field of CUDA_BATCH_MEM_OP_NODE_PARAMS is used for, or what one should set it to. So what value should I be passing?
A related API function is cuStreamBatchMemOp
CUresult cuStreamBatchMemOp (
    CUstream stream,
    unsigned int count,
    CUstreamBatchMemOpParams* paramArray,
    unsigned int flags
);
It essentially takes the fields of CUDA_BATCH_MEM_OP_NODE_PARAMS as separate parameters, and its documentation says that flags is "reserved for future expansion; must be 0".
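Given that, a reasonable assumption is that the node's flags field follows the same convention and should simply be 0. A minimal sketch under that assumption (graph, ctx, and devPtr are placeholders for a graph, context, and device address you already own; the wait-value op is only an example of one batch entry):
#include <cuda.h>

// Hypothetical helper: adds a single "wait until *devPtr >= 1" op as a graph node.
CUresult AddWaitNode(CUgraph graph, CUcontext ctx, CUdeviceptr devPtr, CUgraphNode *node)
{
    CUstreamBatchMemOpParams ops[1] = {};
    ops[0].operation = CU_STREAM_MEM_OP_WAIT_VALUE_32;
    ops[0].waitValue.address = devPtr;                 // device address to poll
    ops[0].waitValue.value = 1;                        // value to wait for
    ops[0].waitValue.flags = CU_STREAM_WAIT_VALUE_GEQ;

    CUDA_BATCH_MEM_OP_NODE_PARAMS nodeParams = {};
    nodeParams.ctx = ctx;                  // context the ops execute in
    nodeParams.count = 1;                  // number of entries in paramArray
    nodeParams.paramArray = ops;
    nodeParams.flags = 0;                  // no documented values; mirror cuStreamBatchMemOp's "must be 0"

    return cuGraphAddBatchMemOpNode(node, graph, nullptr, 0, &nodeParams);
}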

C MySQL Types Error

I'm trying to store results taken from a MySQL query into an array of structs. I can't seem to get the types to work though, and I've found the MySQL documentation difficult to sort through.
My struct is:
struct login_session
{
    char* user[10];
    time_t time;
    int length;
};
And the loop where I'm trying to get the data is:
while ( (row = mysql_fetch_row(res)) != NULL ) {
    strcpy(records[cnt].user, &row[0]);
    cnt++;
}
No matter what I try though I constantly get the error:
test.c:45: warning: passing argument 1 of ‘strcpy’ from incompatible pointer type
/usr/include/string.h:128: note: expected ‘char * __restrict__’ but argument is of type ‘char **’
test.c:45: warning: passing argument 2 of ‘strcpy’ from incompatible pointer type
/usr/include/string.h:128: note: expected ‘const char * __restrict__’ but argument is of type ‘MYSQL_ROW’
Any pointers?
There are multiple problems here, all related to pointers and arrays; I recommend you do some reading.
First, char* user[10] defines an array of 10 char* values, not an array of char, which is what I suspect you want. The warning even says as much: strcpy() expects a char*, but the user field on its own is seen as a char**.
Second, the second argument has one & too many.
Copied from mysql.h header:
typedef char **MYSQL_ROW; /* return data as array of strings */
A MYSQL_ROW is an array of C strings. Indexing with [] dereferences down to a char*, which is exactly what strcpy() takes, but then you take its address again with &, giving a char** instead.
Your code should look more like this:
struct login_session
{
    char user[10];
    time_t time;
    int length;
};

while ( (row = mysql_fetch_row(res)) != NULL ) {
    strcpy(records[cnt].user, row[0]);
    cnt++;
}
I don't know what guarantees you have about the data coming from MySQL, but unless you can be absolutely sure every value fits in the array (at most 9 characters plus the terminating '\0'), you should use strncpy() and terminate the string yourself to avoid overflowing the user array, for example:
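A hedged sketch of that bounded copy, reusing the declarations from the question (records[], cnt, res, row) and treating a NULL column as an empty string:
while ((row = mysql_fetch_row(res)) != NULL) {
    if (row[0] != NULL) {
        /* copy at most 9 characters, then force the terminator */
        strncpy(records[cnt].user, row[0], sizeof(records[cnt].user) - 1);
        records[cnt].user[sizeof(records[cnt].user) - 1] = '\0';
    } else {
        records[cnt].user[0] = '\0';   /* NULL column -> empty string */
    }
    cnt++;
}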

Video decoder on CUDA with ffmpeg

I'm starting to implement a custom video decoder that uses the CUDA hardware decoder to produce YUV frames, which I then want to encode.
How can I fill the CUVIDPICPARAMS struct? Is it possible?
To get the video stream packets I use the ffmpeg dev libraries (avcodec, avformat, ...).
My steps:
1) Open input file:
avformat_open_input(&ff_formatContext,in_filename,nullptr,nullptr);
2) Get the video stream properties:
avformat_find_stream_info(ff_formatContext,nullptr);
3) Get video stream:
ff_video_stream=ff_formatContext->streams[i];
4) Get CUDA device and init it:
cuDeviceGet(&cu_device,0);
CUcontext cu_vid_ctx;
5) Initialize the CUDA video decoder and set its creation params:
CUVIDDECODECREATEINFO *cu_decoder_info=new CUVIDDECODECREATEINFO;
memset(cu_decoder_info,0,sizeof(CUVIDDECODECREATEINFO));
...
cuvidCreateDecoder(cu_video_decoder,cu_decoder_info);
6) Read frame data into an AVPacket:
av_read_frame(ff_formatContext,ff_packet);
And now I need to decode the frame packet with the CUDA video decoder, which in theory is:
cuvidDecodePicture(pDecoder,&picParams);
But before that I need to fill CUVIDPICPARAMS:
CUVIDPICPARAMS picParams;//=new CUVIDPICPARAMS;
memset(&picParams, 0, sizeof(CUVIDPICPARAMS));
How can I fill the CUVIDPICPARAMS struct?
typedef struct _CUVIDPICPARAMS
{
int PicWidthInMbs; // Coded Frame Size
int FrameHeightInMbs; // Coded Frame Height
int CurrPicIdx; // Output index of the current picture
int field_pic_flag; // 0=frame picture, 1=field picture
int bottom_field_flag; // 0=top field, 1=bottom field (ignored if field_pic_flag=0)
int second_field; // Second field of a complementary field pair
// Bitstream data
unsigned int nBitstreamDataLen; // Number of bytes in bitstream data buffer
const unsigned char *pBitstreamData; // Ptr to bitstream data for this picture (slice-layer)
unsigned int nNumSlices; // Number of slices in this picture
const unsigned int *pSliceDataOffsets; // nNumSlices entries, contains offset of each slice within the bitstream data buffer
int ref_pic_flag; // This picture is a reference picture
int intra_pic_flag; // This picture is entirely intra coded
unsigned int Reserved[30]; // Reserved for future use
// Codec-specific data
union {
CUVIDMPEG2PICPARAMS mpeg2; // Also used for MPEG-1
CUVIDH264PICPARAMS h264;
CUVIDVC1PICPARAMS vc1;
CUVIDMPEG4PICPARAMS mpeg4;
CUVIDJPEGPICPARAMS jpeg;
unsigned int CodecReserved[1024];
} CodecSpecific;
} CUVIDPICPARAMS;
typedef struct _CUVIDH264PICPARAMS
{
// SPS
int log2_max_frame_num_minus4;
int pic_order_cnt_type;
int log2_max_pic_order_cnt_lsb_minus4;
int delta_pic_order_always_zero_flag;
int frame_mbs_only_flag;
int direct_8x8_inference_flag;
int num_ref_frames; // NOTE: shall meet level 4.1 restrictions
unsigned char residual_colour_transform_flag;
unsigned char bit_depth_luma_minus8; // Must be 0 (only 8-bit supported)
unsigned char bit_depth_chroma_minus8; // Must be 0 (only 8-bit supported)
unsigned char qpprime_y_zero_transform_bypass_flag;
// PPS
int entropy_coding_mode_flag;
int pic_order_present_flag;
int num_ref_idx_l0_active_minus1;
int num_ref_idx_l1_active_minus1;
int weighted_pred_flag;
int weighted_bipred_idc;
int pic_init_qp_minus26;
int deblocking_filter_control_present_flag;
int redundant_pic_cnt_present_flag;
int transform_8x8_mode_flag;
int MbaffFrameFlag;
int constrained_intra_pred_flag;
int chroma_qp_index_offset;
int second_chroma_qp_index_offset;
int ref_pic_flag;
int frame_num;
int CurrFieldOrderCnt[2];
// DPB
CUVIDH264DPBENTRY dpb[16]; // List of reference frames within the DPB
// Quantization Matrices (raster-order)
unsigned char WeightScale4x4[6][16];
unsigned char WeightScale8x8[2][64];
// FMO/ASO
unsigned char fmo_aso_enable;
unsigned char num_slice_groups_minus1;
unsigned char slice_group_map_type;
signed char pic_init_qs_minus26;
unsigned int slice_group_change_rate_minus1;
union
{
unsigned long long slice_group_map_addr;
const unsigned char *pMb2SliceGroupMap;
} fmo;
unsigned int Reserved[12];
// SVC/MVC
union
{
CUVIDH264MVCEXT mvcext;
CUVIDH264SVCEXT svcext;
};
} CUVIDH264PICPARAMS;
This is the purpose of the CUvideoparser object. You feed it the data stream frame by frame through cuvidParseVideoData, and it calls you back with CUVIDPICPARAMS ready to pass to the decoder when it detects it has a complete frame ready.
All this and more is very well illustrated in the D3D9 decode sample, available here. I suggest studying it in detail because there's not much documentation for this API outside of it.
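A rough sketch of that flow, with assumptions spelled out: the stream is H.264, cu_video_decoder is the decoder created in step 5, and the sequence/display callbacks are left as stubs you would fill in (the real sample creates the decoder in the sequence callback and maps the decoded frames in the display callback):
#include <nvcuvid.h>
extern "C" {
#include <libavcodec/avcodec.h>
}

// Stubs for the other two callbacks (assumed; flesh these out as in the sample)
static int CUDAAPI HandleVideoSequence(void *, CUVIDEOFORMAT *) { return 1; }
static int CUDAAPI HandlePictureDisplay(void *, CUVIDPARSERDISPINFO *) { return 1; }

// The parser calls this with a fully filled CUVIDPICPARAMS for each complete picture
static int CUDAAPI HandlePictureDecode(void *userData, CUVIDPICPARAMS *picParams)
{
    CUvideodecoder decoder = (CUvideodecoder)userData;
    cuvidDecodePicture(decoder, picParams);
    return 1;
}

// Create the parser once, after the decoder exists
CUvideoparser CreateParser(CUvideodecoder decoder)
{
    CUVIDPARSERPARAMS parserParams = {};
    parserParams.CodecType = cudaVideoCodec_H264;      // match your stream
    parserParams.ulMaxNumDecodeSurfaces = 20;
    parserParams.pUserData = decoder;
    parserParams.pfnSequenceCallback = HandleVideoSequence;
    parserParams.pfnDecodePicture = HandlePictureDecode;
    parserParams.pfnDisplayPicture = HandlePictureDisplay;
    CUvideoparser parser = nullptr;
    cuvidCreateVideoParser(&parser, &parserParams);
    return parser;
}

// Feed every AVPacket obtained with av_read_frame()
void FeedPacket(CUvideoparser parser, const AVPacket *pkt)
{
    CUVIDSOURCEDATAPACKET data = {};
    data.payload = pkt->data;
    data.payload_size = pkt->size;
    cuvidParseVideoData(parser, &data);
}
Note that for H.264 in MP4/MKV containers the packets usually have to be converted to Annex B (for example with ffmpeg's h264_mp4toannexb bitstream filter) before the parser will accept them.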

Entity Framework 5 + MySQL.Data 6.6.4.0 causes NullReferenceException on data insert

I am using Entity Framework with MySQL and it's giving me a NullReferenceException every time I try to insert data.
I can insert data directly by creating a command but when I use Entity Framework it bombs out.
Entity Framework can select from and update tables, so perhaps this has something to do with the primary key.
The following exception is thrown from the SaveChanges() method.
failed: System.NullReferenceException : Object reference not set to an instance of an object.
at MySql.Data.Entity.ListFragment.WriteSql(StringBuilder sql)
at MySql.Data.Entity.SelectStatement.WriteSql(StringBuilder sql)
at MySql.Data.Entity.InsertStatement.WriteSql(StringBuilder sql)
at MySql.Data.Entity.SqlFragment.ToString()
at MySql.Data.Entity.InsertGenerator.GenerateSQL(DbCommandTree tree)
at MySql.Data.MySqlClient.MySqlProviderServices.CreateDbCommandDefinition(DbProviderManifest providerManifest, DbCommandTree commandTree)
at System.Data.Common.DbProviderServices.CreateCommandDefinition(DbCommandTree commandTree)
at System.Data.Common.DbProviderServices.CreateCommand(DbCommandTree commandTree)
at System.Data.Mapping.Update.Internal.UpdateTranslator.CreateCommand(DbModificationCommandTree commandTree)
at System.Data.Mapping.Update.Internal.DynamicUpdateCommand.CreateCommand(UpdateTranslator translator, Dictionary`2 identifierValues)
at System.Data.Mapping.Update.Internal.DynamicUpdateCommand.Execute(UpdateTranslator translator, EntityConnection connection, Dictionary`2 identifierValues, List`1 generatedValues)
at System.Data.Mapping.Update.Internal.UpdateTranslator.Update(IEntityStateManager stateManager, IEntityAdapter adapter)
at System.Data.EntityClient.EntityAdapter.Update(IEntityStateManager entityCache)
at System.Data.Objects.ObjectContext.SaveChanges(SaveOptions options)
at System.Data.Objects.ObjectContext.SaveChanges()
EF 5
public decimal CreateAlertNotification2(ulong alertServiceId, Alerting.alert_service_notification_type notificationType, string recipientName, string recipientEndpoint)
{
    using (risk_fleetEntities dbContext = new risk_fleetEntities())
    {
        var newNotification = new alert_service_notification();
        newNotification.sys_alert_service_id = alertServiceId;
        newNotification.name = recipientName;
        newNotification.notification_type = Enum.GetName(typeof(Alerting.alert_service_notification_type), notificationType);
        newNotification.recipient = recipientEndpoint;
        dbContext.alert_service_notification.AddObject(newNotification);
        dbContext.SaveChanges();
        return newNotification.id;
    }
}
MySQL 5
CREATE TABLE `alert_service_notification` (
`id` bigint(20) unsigned AUTO_INCREMENT PRIMARY KEY,
`sys_alert_service_id` bigint(20) unsigned NOT NULL,
`notification_type` ENUM("SMS", "Email"),
`name` CHAR(50),
`recipient` CHAR(50),
FOREIGN KEY (`sys_alert_service_id`) REFERENCES `alert_service`(`id`)
);
EF doesn't support unsigned integers, so in this case it has to map the unsigned bigint to decimal, because Int64 isn't big enough (maximum 9,223,372,036,854,775,807 vs unsigned bigint's 18,446,744,073,709,551,615). For the same reason it maps an unsigned int to Int64 instead of Int32.
If you try to change the type in the EF designer it doesn't quite do everything correctly and throws errors like this at runtime. You can open the edmx by hand and edit it to fix this, tricking EF into assuming the column isn't really unsigned: find all references to your column and make sure none of them mention Int64/decimal (depending on whether you want Int32 or Int64). However, if you do this, your database could return values the model can't represent if you ever go over the signed maximum.
So the easier and more correct fix is to not use unsigned, or to change the existing columns away from unsigned. Certain MySQL designer tools default to unsigned id columns, so watch out for that!
My solution was very simple: I just edited the database to have no unsigned columns and built the model from that. Works like a charm, no manual edmx editing needed. Also, changing the columns back to unsigned after generating the model doesn't seem to cause any issues. :)

How to display date using C++?

How can I display the date using the function "MessageBox"?
Here is a link for several different ways to get the date and time:
Date & Time
Copied from site above:
Definition (from Windows):
typedef struct _SYSTEMTIME {
    WORD wYear;
    WORD wMonth;
    WORD wDayOfWeek;
    WORD wDay;
    WORD wHour;
    WORD wMinute;
    WORD wSecond;
    WORD wMilliseconds;
} SYSTEMTIME, *PSYSTEMTIME, *LPSYSTEMTIME;
Implementation (the first two lines are native Win32; the DateTime/MessageBox::Show lines are C++/CLI, i.e. managed code):
SYSTEMTIME st;
GetSystemTime(&st);
// Format the fields of st however you want, or in C++/CLI:
DateTime dateTime = DateTime::Now;
MessageBox::Show(dateTime.ToString());
Other ToXString() functions can be found here
For example like this (I assume you're asking about the native Windows API):
// Get current time
SYSTEMTIME now;
GetLocalTime(&now);
// Format the date using the default user language
TCHAR buffer[1024];
GetDateFormat(
    MAKELCID(LANG_USER_DEFAULT, SORT_DEFAULT),
    0,
    &now,
    NULL,
    buffer,
    1024
);
// Show it in a message box
MessageBox(HWND_DESKTOP, buffer, _T("Today"), MB_OK);
It's also possible to ask GetDateFormat to calculate the buffer length required to store the output. To do that, pass NULL and 0 as the last two parameters:
int length = GetDateFormat(
    MAKELCID(LANG_USER_DEFAULT, SORT_DEFAULT),
    0,
    &now,
    NULL,
    NULL,
    0
);
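Putting the two calls together, a sketch of the full pattern (hypothetical helper name, error handling kept minimal):
#include <windows.h>
#include <tchar.h>
#include <vector>

void ShowTodayMessageBox()
{
    SYSTEMTIME now;
    GetLocalTime(&now);

    // First call: query the required length (in TCHARs, including the terminator)
    int length = GetDateFormat(MAKELCID(LANG_USER_DEFAULT, SORT_DEFAULT),
                               0, &now, NULL, NULL, 0);
    if (length == 0)
        return;  // check GetLastError() for details

    std::vector<TCHAR> buffer(length);

    // Second call: format the date into the buffer we just sized
    if (GetDateFormat(MAKELCID(LANG_USER_DEFAULT, SORT_DEFAULT),
                      0, &now, NULL, &buffer[0], length) != 0)
    {
        MessageBox(HWND_DESKTOP, &buffer[0], _T("Today"), MB_OK);
    }
}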