What data should an access unit contain?
Is [SPS][PPS][IDR][PFrame][BFrame] an access unit?
I got the concept that access unit delimiters are required in a TS packet. But if the source file (I am using a .264 file as input) does not contain AUDs, should I add the AUD explicitly? If yes, then of how many bytes?
Should it be the following?
0x00000169 (header + 1 byte)
And where should I add the AUD?
Currently my TS file has this structure:
[TS Header][PES Header][SPS][PPS][IDR][PFrame][BFrame][TS Header]...
Is this structure okay? Also, can one access unit contain multiple I frames?
If the source file does not contain AUDs, should I add the AUD explicitly?
YES
Also, can one access unit contain multiple I frames?
No. An access unit is analogous to a single frame; the delimiters are used to explicitly mark frame boundaries, hence the name.
0x00000169 (header + 1 byte)
Just use 0x00000169FF80 for the AUD. You can hard-code it.
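For placement: when present, the AUD has to be the first NAL unit of its access unit, so the payload becomes [TS Header][PES Header][AUD][SPS][PPS][IDR].... A minimal sketch in Python of prepending it during packetization (frame splitting and the PES/TS wrapping are assumed to happen elsewhere):

AUD = bytes.fromhex("00000169FF80")  # the hard-coded delimiter from above

def with_aud(access_unit: bytes) -> bytes:
    # access_unit: raw Annex-B bytes of one frame (SPS/PPS/slice NALs)
    return AUD + access_unit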
I am using ELKI to cluster data from a CSV file.
I use
-resulthandler ResultWriter
-out folder/
to save the output data.
But the output contains some strange indexes:
ID=2138 0.1799 0.2761
ID=2137 0.1797 0.2778
ID=2136 0.1796 0.2787
ID=2109 0.1161 0.2072
ID=2007 0.1139 0.2047
The IDs are greater than 2000, even though I have fewer than 100 training samples.
DBIDs are internal; the documentation clearly says that you shouldn't make too many assumptions about them, because their implementation may change. The only reason they are written to the output at all is that some methods (such as OPTICS) may require cross-referencing objects by this unique ID.
Because they are meant to be unique identifiers, they are usually continuously incremented. The next time you click on "run" in the MiniGUI, you will get the next n IDs... so clearly, you clicked run more than once.
The "Tips & Tricks" in the ELKI DBID documentation probably answer your underlying question - how to use map DBIDs to line numbers of your input file. The best way is to if you want to have object identifiers, assign object identifiers yourself by using an identifier column (and configuring it to be an external identifier).
For further information, see the documentation: https://elki-project.github.io/dev/dbids
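For example, if I recall correctly, ELKI's default parser treats a non-numeric trailing column as a label, so an input file along these lines (the values and label names here are made up) would carry stable identifiers into the ResultWriter output instead of internal DBIDs:

0.1799 0.2761 sample001
0.1797 0.2778 sample002
0.1796 0.2787 sample003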
I am running Access 2010. I need to read a TXT file into a string. Each line can be anywhere from 40 to 320 characters long, ending in a CR. The biggest problem is that the lines of the TXT file contain commas (,) and quotation marks (") as part of the data.
Is there a trick to doing this? Even if it means reading each character and testing whether it is a CR...
To accomplish this task, you will need to write your own import code that reads directly from the file. The Microsoft Access import features will not handle a file like this very well, and since you want to analyze each line in code, it is better to handle the reading yourself.
There are many approaches you can take, all of which involve file handles and opening the file. But the best approach is to use a class that does the dirty work for you.
One such class is the LargeTextFile class that can be found in any of the Microsoft Access Developer's Handbooks (Volume 1) for Access 97, 2000, 2002 or 2003, written by Getz, Litwin, and Gilbert (Sybex), if you have access to one of them.
Another option would be the clsReadTextFile class, available for free on the Access MVP Site (The Access Web):
http://www.theaccessweb.com/downloads/clsReadTextFile.txt
Using clsReadTextFile you can process your file, line by line using code similar to this:
Dim file As New clsReadTextFile
Dim line As String

file.FileName = "C:\MyFile.txt"
file.cfOpenFile                  ' open the file for reading
Do While Not file.EndOfFile
    file.csGetALine              ' read the next line into .Text
    line = file.Text
    If InStr(line, "MySearchText") Then
        'Do something
    End If
Loop
file.cfCloseFile                 ' release the file handle
The line string variable will contain the text of the line just read, and you can write code to parse it how you need and process it appropriately. Then the loop will go on to read the next line. This will allow you to process each line of the file manually in your code.
It is not clear from your post as to whether or not you can - or have tried - to use the tools available in the product for this task. Access 2010 offers linking to a .txt file as well as appending a .txt file to a table. These are standard features in the External tab of the ribbon.
The Large Text (formerly Memo) field type allows ~4K characters. Not sure if you wish to attempt to bring in all the txt data into a single field - if so then this limit is important.
If the CRs in the text document imply a new record/row of data (rather than one continuous string for the entire document), and if there is any consistent structure within all rows of data, then the import wizard can use either character counts or symbols (i.e. commas, if they exist) as the means to separate each individual row of data into separate fields in a single row of a table.
I'm currently "hacking" an old 3d Printer, built in 1996. There is Software running on an old Windows PC. I need to modify some parameters which are not accessible from the front end, so I wanted to modify the config files. But if I modify something, it could not be read anymore. I noticed, that there is a checksum at the end of the file, and I'm not really an checksum expert. I assume that, while loading the file, this checksum is calculated again and compared to the one at the end.
I'm having trouble finding out which checksum algorithm is used.
What I have already found out: I think it's not just an addition of the bytes in the file. When I swap two characters, a checksum generated by addition would not change, but the software won't accept that file.
I'm guessing it's some kind of CRC16, because the checksum looks like this:
0x4f20
As I have calculated that number with several usual CRC16 parameter sets and could not find a match for "4f20", I assume that it must be a custom CRC16.
Here is a complete sample file:
PACKET noname
style 502
last_modified 1511855084 # Tue Nov 28 08:44:44 2017
STRUCTURE MACHINE_OVRL
PARAM distance_units
Value = "millimeters"
ENDPARAM
PARAM language
Value = "English"
ENDPARAM
ENDSTRUCTURE
ENDPACKET
checksum 0x4f20
I think either the checksum itself or the complete line "checksum 0x4f20" is excluded when the checksum is calculated, because otherwise that wouldn't be possible (?)
Any help is appreciated.
Edit: I have some more files with checksums, of course, but these are a lot longer than this file. If needed, I could provide them too.
CRC RevEng was written for this purpose. Given several examples of the input and the associated CRCs, RevEng will derive the CRC parameters (if it is a CRC).
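If you want to experiment before reaching for RevEng, brute-forcing the handful of widespread CRC-16 parameter sets is quick to do yourself. A sketch in Python (the file name is hypothetical, and the variant list is only a tiny sample of the space RevEng searches):

def reflect(value, width):
    # reverse the bit order of an integer of the given width
    return int(f"{value:0{width}b}"[::-1], 2)

def crc16(data, poly, init, refin, refout, xorout):
    crc = init
    for byte in data:
        if refin:
            byte = reflect(byte, 8)
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    if refout:
        crc = reflect(crc, 16)
    return crc ^ xorout

# (name, poly, init, refin, refout, xorout) for a few common variants
VARIANTS = [
    ("CRC-16/CCITT-FALSE", 0x1021, 0xFFFF, False, False, 0x0000),
    ("CRC-16/XMODEM",      0x1021, 0x0000, False, False, 0x0000),
    ("CRC-16/ARC",         0x8005, 0x0000, True,  True,  0x0000),
    ("CRC-16/MODBUS",      0x8005, 0xFFFF, True,  True,  0x0000),
]

data = open("sample.cfg", "rb").read()   # hypothetical file name
data = data[:data.rfind(b"checksum")]    # exclude the checksum line, as guessed above
for name, poly, init, refin, refout, xorout in VARIANTS:
    if crc16(data, poly, init, refin, refout, xorout) == 0x4F20:
        print("match:", name)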
I have a fixed-length flat file as input. The records look like this:
40000003858172870114823 0010087192017092762756014202METFORMIN HCL ER 500 MG 0000001200000300900000093E00000009E00000000{0000001{00000104{JOHN DOE 196907161423171289 2174558M2A2 000 xxxx YYYYY 100000000000 000020170915001 00010000300 000003zzzzzz 000{000000000{000000894{ aaaaaaaaaaaaaaa P2017092700000000{00000000{00000000{00000000{ 0000000{00000{ F89863 682004R0900001011B2017101109656 500 MG 2017010100000000{88044828665760
If you look just before the JOHN DOE you will see a field that represents a money field. It looks like 00000104{.
This looks like the type of field I used to process from a mainframe many years ago. How do I handle this in SSIS? If the { on the end is in fact a 0, then I want the field to be a string that reads 0000010.40.
I have other money fields, e.g. 00000159E. If my memory serves me correctly, that would be 00000015.95.
I can't find anything on how to do this transform.
Thanks,
Dick Rosenberg
Import the values as strings:
00000159E
00000104{
In a Derived Column transformation, do your transforms with REPLACE:
REPLACE(REPLACE(col,"E","5"),"{","0")
In another Derived Column, cast to money and divide by 100:
(DT_CY)(drvCol) / 100
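For reference, the trailing character here is the standard zoned-decimal sign overpunch ({ through I for +0..+9, } through R for -0..-9), so the same transform can be written generically outside SSIS. A sketch in Python, assuming two implied decimal places:

from decimal import Decimal

POSITIVE = {"{": 0, "A": 1, "B": 2, "C": 3, "D": 4,
            "E": 5, "F": 6, "G": 7, "H": 8, "I": 9}
NEGATIVE = {"}": 0, "J": 1, "K": 2, "L": 3, "M": 4,
            "N": 5, "O": 6, "P": 7, "Q": 8, "R": 9}

def decode_overpunch(field, decimals=2):
    # replace the overpunched last character with its digit, track the sign
    last = field[-1]
    if last in POSITIVE:
        sign, digit = 1, POSITIVE[last]
    elif last in NEGATIVE:
        sign, digit = -1, NEGATIVE[last]
    else:
        sign, digit = 1, int(last)   # plain trailing digit, no overpunch
    number = int(field[:-1] + str(digit))
    return sign * Decimal(number) / (10 ** decimals)

print(decode_overpunch("00000104{"))   # 10.4
print(decode_overpunch("00000159E"))   # 15.95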
I think you will need to either use a Script Component source in the data flow, or use a Derived Column transformation or Script Component transformation. I'd recommend a Script Component either way as it sounds like your custom logic will be fairly complex.
I have written a few detailed answers about how to implement a Script component source:
SSIS import a Flat File to SQL with the first row as header and last row as a total
How can I load in a pipe (|) delimited text file that has columns that sometimes contain line breaks?
Essentially, you need to locate the string, "00000104{", for example, and then convert it into decimal/money form before adding it into the data flow (or during it if you're using a Derived Column Transformation).
This could also be done in a Script Component transformation, which would function in a similar way to the Derived Column transformation, only you'd perhaps have a bit more scope for complex logic. Also in a Script Component transformation (as opposed to a source), you'd already have all of your other fields in place from the Flat File Source.
When working with IDT 4.1 and making a query in the business layer, is it possible to format the yielded numbers? What I mean is: the output looks something like "1.9982121921E7" (please notice the E7 part). I would like BO to display the whole number without any suffixes.
Additionally, it would be even better to add a delimiter after thousands, millions,...
Is 1.9982121921E7 the value that is returned from the database? Thus not a number but a string (alphanumeric)? In that case, you'll have to change the select statement and use a database function to trim the non-numeric characters off (e.g. SUBSTR, MID, LEFT, …).
Once you have numeric data, you can use the Display Format function to change the layout of your object.
If you're not happy with the predefined formats to choose from, you can always define a custom format. The formatting options are described in the Information Design Tool User Guide (links to the documentation for IDT in BI 4.1 SP3), section 12.10.23 Creating and editing display formats for business layer objects.
Right-click an object and select Create Display Format… from the context menu.
Or click the Create Display Format… button in the object's properties (located in the Advanced tab).
Set the type to numeric and enable the high precision check mark.
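If none of the predefined formats includes grouping, a custom format string along these lines should display the full number with thousands separators (the exact token syntax is described in the guide linked above, so treat this string as an assumption to verify):

#,##0.00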