Hyperledger Composer System Namespaces

I found the following sentence in the Hyperledger Composer Modeling language documentation about system namespaces.
Could anyone explain the meaning of the following sentence?
If you have defined an event or transaction including an eventId, transactionId, or timestamp, you must delete the eventId, transactionId, or timestamp properties.
Link to reference

The eventId, transactionId, and timestamp properties are automatically appended by Hyperledger Fabric in the background. So if you look at the REST (LoopBack) interface, you can see that a transaction ID and timestamp are present there, but as a client submitting a transaction you don't need to provide those parameters; they are filled in automatically once the transaction completes. When you later read these transactions, you will be able to see the timestamp and transaction ID.
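In practice, this means your model file must not redeclare those properties itself. A minimal sketch in the Composer modeling language (hypothetical namespace and fields):

namespace org.example.trading

// Do NOT declare transactionId or timestamp here; the system
// namespace already adds them to every transaction.
transaction Trade {
  o String commodityId
  o Integer quantity
}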

Related

Can I do Change Data Capture with MariaDb's Automatic Data Versioning

We're using MariaDB in production, and we've added a MariaDB slave so that our data team can perform some ETL tasks from this slave to our data warehouse. However, this setup lacks a proper Change Data Capture feature (i.e. they want to know which rows of a production table changed since yesterday, in order to query only the rows that actually changed).
I saw that MariaDB 10.3 has an interesting feature that allows performing a SELECT on an older version of a table. However, I haven't found resources supporting the idea that it could be used for CDC. Any feedback on this feature?
If not, we'll probably resort to streaming the slave's binlogs to our data warehouse, but that looks challenging.
Thanks for your help!
(As a supplement to Stefan's answer)
Yes, system versioning can be used for CDC, because the validity period given by ROW_START (when the row version became valid) and ROW_END (when it stopped being valid) can be interpreted to tell whether an INSERT, UPDATE, or DELETE query happened. But it is more cumbersome than dedicated CDC alternatives.
INSERT:
The object appears for the first time.
ROW_START is the insertion time.
UPDATE:
The object has appeared before.
ROW_START is the update time.
DELETE:
ROW_END lies in the past.
There is no newer entry for this object among the following rows.
I'll add a picture to clarify this.
You can see that this versioning is space-saving, because the information about the INSERT and DELETE of an object is combined in one row, but checking for DELETEs is costly.
In the example above I used a table with a clear primary key, so checking for the same object is easy: just look at the id. If you want to capture changes in tables keyed by a column combination, the whole process gets more annoying.
Edit: another point is that the history data is kept in the same table as the "real" data. Maybe this is faster for an INSERT than known alternative solutions like tracking via TRIGGER (like here), but if the table changes frequently and you want to process/analyse the CDC data, this can cause performance problems.
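A minimal sketch of such checks, assuming a hypothetical system-versioned table items with primary key id (the invisible row_start/row_end columns have to be selected explicitly):

-- Rows inserted or updated since yesterday:
SELECT id, ROW_START, ROW_END
FROM items FOR SYSTEM_TIME ALL
WHERE ROW_START >= NOW() - INTERVAL 1 DAY;

-- Rows deleted since yesterday: the latest version ended recently
-- and no current row with that id remains.
SELECT h.id, MAX(h.ROW_END) AS deleted_at
FROM items FOR SYSTEM_TIME ALL AS h
LEFT JOIN items AS c ON c.id = h.id
WHERE c.id IS NULL
GROUP BY h.id
HAVING MAX(h.ROW_END) >= NOW() - INTERVAL 1 DAY;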
MariaDB supports system-versioned tables since version 10.3.4. System-versioned tables are specified in the SQL:2011 standard. They can be used to automatically capture previous versions of rows; those versions can then be queried to retrieve their values as they were at a specific point in time.
The following text and code examples are from the official MariaDB documentation:
With system-versioned tables, MariaDB Server tracks the points in time when rows change. When you update a row on these tables, it creates a new row to display as current without removing the old data. This tracking remains transparent to the application. When querying a system-versioned table, you can retrieve either the most current values for every row or the historic values available at a given point in time.
You may find this feature useful in efficiently tracking the time of changes to continuously-monitored values that do not change frequently, such as changes in temperature over the course of a year. System versioning is often useful for auditing.
Adding SYSTEM VERSIONING to a newly created table, or to an existing table using ALTER, extends the table with row_start and row_end timestamp columns, which allow retrieving the record version that was valid between those two timestamps.
CREATE TABLE accounts (
    id INT PRIMARY KEY AUTO_INCREMENT,
    name VARCHAR(255),
    amount INT
) WITH SYSTEM VERSIONING;
It is then possible to retrieve data as it was at a specific time (with SELECT * FROM accounts FOR SYSTEM_TIME AS OF '2019-06-18 11:00';), all versions within a specific time range
SELECT * FROM accounts
FOR SYSTEM_TIME
BETWEEN (NOW() - INTERVAL 1 YEAR)
AND NOW();
or all versions at once:
SELECT * FROM accounts
FOR SYSTEM_TIME ALL;
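For an existing table, the ALTER variant mentioned above is a one-liner:
ALTER TABLE accounts ADD SYSTEM VERSIONING;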

How to effectively lock/block rows?

I am using an API to develop an inventory system for a company. I want the first person who selects a row (or rows) from the MySQL database to hold a lock on it. The second person should be denied any data from the set of rows the first user holds. Is that even possible?
The use case: the information in the database is constantly being added to or updated by users. If user A does a select, it will always be followed by an update. But if user B selects the information and updates it before user A is done, user B's work will be lost when user A finishes, or vice versa.
I have tried to use transactions, but they do not stop a second user from getting the rows the first user requested.
start transaction;
select * from peak2_0.staff where `First Name` = 'Aj';
update peak2_0.staff set `First Name` = 'aj' where `First Name` = 'Aj';
commit;
As I mentioned in the comments, you can create a field (or two) for "locking" the entry while a user is working on it; more of a "down for maintenance" indicator than an actual server lock. You can even make it atomic and recoverable with something like this:
UPDATE someTable
SET locked_by = client_or_user_id, locked_when = now()
WHERE [criteria for selecting the record(s) being worked on]
AND locked_by IS NULL
;
You can then select from the table to see whether it got your program's client id or user id for the lock. "Recoverable" in the sense that, should the client system go down before unlocking the data, a routine process (client-side, or a MySQL event) can release any locks older than a certain amount of time. Alternatively, the original update, and anything that tries to respect locks, can have the standard lock-checking condition tweaked to something like AND (locked_by IS NULL OR locked_when < now() - INTERVAL 15 MINUTE)
If an editing client needs to hold a lock for longer, it can do so just by updating locked_when values further; or you could also/alternatively use a "lock until" field.
Optionally, you could even add a lock reason so clients attempting to access such an entry can be informed why it is unavailable.
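A sketch of the full cycle under those assumptions (hypothetical staff table with an id column plus the added locked_by/locked_when fields; 42 stands in for the real client or user id):

-- 1. Try to acquire the lock, tolerating stale locks older than 15 minutes:
UPDATE staff
SET locked_by = 42, locked_when = NOW()
WHERE id = 7
  AND (locked_by IS NULL OR locked_when < NOW() - INTERVAL 15 MINUTE);

-- 2. Confirm the lock is ours (a row coming back means we hold it):
SELECT id FROM staff WHERE id = 7 AND locked_by = 42;

-- 3. Do the work, then release:
UPDATE staff
SET locked_by = NULL, locked_when = NULL
WHERE id = 7 AND locked_by = 42;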

easiest way to know when a MySQL database was last accessed

I have MySQL tables that are all InnoDB.
We have so many copies of various databases spread across multiple servers (trust me we're talking hundreds here), and many of them are not being queried at all.
How can I get, for example, the MAX(LastAccessDate) for all tables within a specific database? Especially considering that they are InnoDB tables.
I would prefer to know even when a "select" query was last run, but would settle for "insert/update" as well, since if a db hasn't changed in a long time, it's probably dead.
If you have a table that always gets values inserted, you can add a trigger on insert/update. Inside this trigger you can store the current timestamp in a dedicated database, including the name of the database from which the write took place.
This way the only requirement on your databases is that they support triggers.
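A minimal sketch of that idea, assuming a hypothetical monitor schema and an orders table in the database being watched:

CREATE TABLE monitor.last_activity (
    db_name    VARCHAR(64) PRIMARY KEY,
    last_write DATETIME NOT NULL
);

DELIMITER //
CREATE TRIGGER orders_track_write
AFTER INSERT ON mydb.orders
FOR EACH ROW
BEGIN
    -- Keep one row per database, holding the time of the latest write.
    INSERT INTO monitor.last_activity (db_name, last_write)
    VALUES ('mydb', NOW())
    ON DUPLICATE KEY UPDATE last_write = NOW();
END//
DELIMITER ;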
Alternatively you could take a look at this link:
Modify date and create date for a table can be retrieved from the sys.tables catalog view. When any structural changes are made, the modify date is updated. It can be queried as follows:
USE [SqlAndMe]
GO
SELECT [TableName] = name,
       create_date,
       modify_date
FROM   sys.tables
WHERE  name = 'TransactionHistoryArchive'
GO
sys.tables only shows the modify date for structural changes. If we need to check when a table was last updated or accessed, we can use the dynamic management view sys.dm_db_index_usage_stats. This DMV returns counts of different types of index operations and the last time each operation was performed.
It can be used as follows:
USE [SqlAndMe]
GO
SELECT [TableName] = OBJECT_NAME(object_id),
       last_user_update, last_user_seek, last_user_scan, last_user_lookup
FROM   sys.dm_db_index_usage_stats
WHERE  database_id = DB_ID('SqlAndMe')
AND    OBJECT_NAME(object_id) = 'TransactionHistoryArchive'
GO
last_user_update – provides time of last user update
last_user_* – provides time of last scan/seek/lookup
It is important to note that sys.dm_db_index_usage_stats counters are reset when the SQL Server service is restarted.
Hope This Helps!

Log all events on all tables with MySQL

I need to log all events on all tables into a database_log table (id, user, timestamp, tablename, old_value, new_value).
I thought I could create the same trigger on all tables (~25), with a little PHP script dynamically substituting the table name. But in that case I can't retrieve the old and new values, because the tables don't all have the same columns, so I can't just concatenate all fields to store them in "old_value" and "new_value" (even if I retrieve the field names from the schema, I can't use CONCAT() on them to select all values and store them in a variable).
For example, something like:
SELECT * INTO v_myvar FROM my_table WHERE id = OLD.id;
CALL addLog(v_myvar);
where addLog is a procedure that takes my old values and adds a line with the other information, would save my life.
So, I'm looking for a sexy solution with one trigger and/or one procedure (per table), or a useful tool. Does anyone have a solution?
Thanks
-- Point the general query log at a file and switch it on:
SET GLOBAL general_log_file = '/var/log/mysql/mysql.log';
SET GLOBAL general_log = 'ON';
The general query log is a general record of what mysqld is doing. The server writes information to this log when clients connect or disconnect, and it logs each SQL statement received from clients.
See the MySQL documentation.
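If querying the log with SQL is more convenient than tailing a file, the server can also write it to a table (standard MySQL options, shown as a sketch):

SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 'ON';

-- Statements then land in mysql.general_log:
SELECT event_time, user_host, argument
FROM mysql.general_log
ORDER BY event_time DESC
LIMIT 10;

Note that the general log records statements, not old/new column values, so it is a coarse substitute for per-table triggers.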

How to get the value from step1 to step2 in sql Job

I need to create a SQL Server job.
Step 1:
Insert a row into the TaskToProcess table and return ProcessID (PK and identity).
Step 2:
Retrieve the ProcessID generated in step 1, pass the value to an SSIS package, and execute the SSIS package.
Is this possible in a SQL Server job?
Please help me on this.
Thanks in advance.
There is no built-in method of passing variable values between job steps. However, there are a couple of workarounds.
One option would be to store the value in a table at the end of step 1 and query it back from the database in step 2.
It sounds like you are generating ProcessID by inserting into a table and returning the SCOPE_IDENTITY() of the inserted row. If job step 1 is the only process inserting into this table, you can retrieve the last inserted value in step 2 using the IDENT_CURRENT('<tablename>') function.
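A sketch of that pattern (hypothetical TaskToProcess columns):

-- Step 1: insert the row; the identity column generates ProcessID.
INSERT INTO TaskToProcess (TaskName) VALUES (N'NightlyLoad');

-- Step 2: recover the last identity generated for the table.
-- Only safe if step 1 is the sole inserter!
DECLARE @ProcessID INT = CONVERT(INT, IDENT_CURRENT('TaskToProcess'));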
EDIT
If multiple processes could insert into your process control table, the best solution is probably to refactor steps 1 and 2 into a single step, possibly with a controlling SSIS master package (or other equivalent technology) which can pass the variables between steps.
Similar to Ed Harper's answer, but with some details found in the "Variables in Job Steps" MSDN forum thread:
For the job environment, some flavor of Process-Keyed Tables (using the job_id) or Global Temporary Tables seems most useful. Of course, I realize that you might not want to have something left "globally" available. If necessary, you could also look into encrypting or obfuscating the value that you store. Be sure to delete the row once you have used it.
The Process-Keyed Tables are described in the article "How to Share Data between Stored Procedures".
Another suggestion, from the "Send parameters to SQL server agent jobs/job steps" MSDN forum thread, is to create a table to hold the parameters, such as:
CREATE TABLE SQLAgentJobParms
(
    job_id             UNIQUEIDENTIFIER,
    execution_instance INT,
    parameter_name     NVARCHAR(100),
    parameter_value    NVARCHAR(100),
    used_datetime      DATETIME NULL
);
Your calling stored procedure would take the parameters passed to it and insert them into SQLAgentJobParms. After that, it could use EXEC sp_start_job. And, as already noted, the job steps would select from SQLAgentJobParms to get the necessary values.
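A sketch of both sides of that exchange (assuming the caller already knows @job_id; names and values are hypothetical):

-- Caller: stash the parameter, then start the job.
INSERT INTO SQLAgentJobParms (job_id, execution_instance, parameter_name, parameter_value)
VALUES (@job_id, 1, N'ProcessID', N'12345');
EXEC msdb.dbo.sp_start_job @job_id = @job_id;

-- Inside a job step: read the parameter back, then mark it used.
SELECT parameter_value
FROM SQLAgentJobParms
WHERE job_id = @job_id AND parameter_name = N'ProcessID';

UPDATE SQLAgentJobParms
SET used_datetime = GETDATE()
WHERE job_id = @job_id AND parameter_name = N'ProcessID';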
