Accessing data imported into RavenDB via CSV import

I have successfully imported geodata (originally from a shapefile, converted to CSV) into my RavenDB. I am now trying to access the data with a naive, simplistic select (sanity check to see if everything's there) but I can't get any data member values back. Since I am a total RavenDB newbie and haven't created the data myself (programmatically), my approach was to define a class that has the same name as what I find in Raven Studio (minus the automatically-appended plural 's') under Raven-Entity-Name, and to declare each of the data members to be of type string.
The query runs through and retrieves the first 128 results, but all the data members are null. I used this:
List<AdministrativeArea> administrativeAreas = session
    .Query<AdministrativeArea>()
    .ToList();
Looking at the entries in Raven Studio, I can see that some of the data member values of the documents are coloured blue (so they have probably already been typed as integers), but that shouldn't cause ALL the data members to show up as null...
No exceptions are being thrown, and the query list contains elements. What am I doing wrong here?
Thanks for your help!

The problem was the deserialization of the int data members. Even when declaring the int members as nullable, empty strings in the imported data prevented correct instantiation of the objects.
I suppose that when CSV imports are used and a field sometimes comes up "empty" (but as a string type) while in other cases it DOES contain numbers, you have to resort to declaring all such members as strings. The only other solution I can think of is to adapt the CSV import code, but I am still too new to RavenDB to attempt that.
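For illustration, a minimal sketch of the kind of class that works around this (property names are made up; the point is that every member is declared as string, so empty CSV fields deserialize cleanly):

public class AdministrativeArea
{
    // Every member is a string: fields that are numeric in some rows
    // and empty in others then deserialize without errors.
    public string Id { get; set; }
    public string Name { get; set; }
    public string AreaCode { get; set; } // numeric in some rows, empty in others
}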


JSON flattening in AWS Glue ETL job creates inferred schema with duplicated columns

I'm relatively new to AWS Glue and using the visual AWS Glue studio at the moment. Kind of a niche issue I'm having here...
Context:
I'm building an ETL job that, among other things, should parse/flatten JSON from a string column and replace it with separate fields in the appropriate formats, which I can then select and load into my data warehouse table.
Approach:
I first extract my data from the Glue catalog as a dynamicFrame (in this case only one table).
Then I'm trying to use the approach of unboxing and unnesting.
Let's call that json column data:
from awsglue.transforms import Unbox, UnnestFrame
from awsglue.dynamicframe import DynamicFrameCollection

def transformTable(glueContext, dfc) -> DynamicFrameCollection:
    dyf = dfc.select(list(dfc.keys())[0])
    dyf = Unbox.apply(frame=dyf, path="data", format="json")
    dyf = UnnestFrame.apply(frame=dyf)
    return DynamicFrameCollection({"TranformedTable": dyf}, glueContext)
(Then I have a step to select the right frame from the frame collection, and then I can apply mapping to my fields and load.)
My issue:
Glue automatically infers the data types of my frame schema (rather successfully),
but it duplicates certain fields when the data type is unclear (similar to make_cols in the resolveChoice method): e.g. I end up with two fields in the output schema, price_int and price_double, where price_int contains only the values that happened to be round numbers and nulls everywhere else, etc.
So it seems the default behavior of this method is to split columns whenever there is doubt about the data type (make_cols).
I understand that I could write a resolveChoice for each field, but with this approach the fields are already split into separate columns in the output schema.
Note: There are dozens of fields in this json, so I'm trying to devise a blanket solution that automatically makes all the fields of the json available in the schema to select and map in the next step, and avoid having to add one line of code for each field I want to extract. (And the json structure will grow with new fields in the future, so I'm trying to limit future ETL maintenance...)
Questions/help needed:
Any idea if there's a way to change this default behavior (like in the resolveChoice method)?
Alternatively, is there a way to apply a kind of default resolveChoice to all problematic fields from the json unboxing? For instance, I could force all problematic fields into string (similar to 'project:string'), and then reformat them if needed in the applyMapping step. But resolveChoice seems to need to be applied field by field (see the sketch after this question for one possible way around that)...
What's a different/better approach I could try? I would like to keep it as dynamic/automated as possible... e.g.:
I think I could maybe extract specific fields from the JSON line by line, but I'm not sure how (it looks like the Unbox method is already splitting columns by format). And as explained, it's dozens of fields and growing... so that would require updating the code regularly, instead of just ticking boxes in the list of available fields.
The Relationalize method could be an option, but it creates distinct frames, and this quickly becomes much more complex (there are actually several columns with JSON, which all need to be flattened...).
Creating crawlers or classifiers that run automatically at regular intervals to extract the schema from that specific string column of a table could be an option as well...
Thanks in advance!
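One possible direction (an untested sketch, not a confirmed solution): build the resolveChoice specs programmatically from the inferred schema, so every ambiguous field gets a blanket "cast:string" without one hand-written line per field. The helper name is invented, and this assumes the awsglue schema API exposes its fields and their types this way:

from awsglue.transforms import ResolveChoice
from awsglue.gluetypes import ChoiceType

def resolve_all_choices_to_string(dyf):
    # Collect one "cast:string" spec per field whose inferred type is
    # ambiguous (a ChoiceType), then resolve them all in a single call.
    specs = [
        (field.name, "cast:string")
        for field in dyf.schema().fields
        if isinstance(field.dataType, ChoiceType)
    ]
    return ResolveChoice.apply(frame=dyf, specs=specs) if specs else dyf

Any field forced to string this way could then be reformatted in the applyMapping step, as suggested above.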

SSIS consolidate and concatenate multiple rows into single rows without using SQL

I am trying to accomplish something that is pretty easy to do in SQL, but seemingly very challenging to do in SSIS without using SQL. Basically, I need to consolidate and concatenate a field of a many-to-one relationship.
Given entities: [Contract Item] (many) to (one) [Account]
There is a field [ari_productsummary] that contains the product listed on the Contract Item entity. We want to write that value to the Account as [ari_activecontractitems]. However, an Account may have more than one Contract Item record associated with it, in which case we want to concatenate those values. We also only want the distinct values to be concatenated (distinct rows are already handled within my data flow).
This can be accomplished by writing to a temporary table and then using a query or view to obtain the summarized results, as follows. I created a SQL table called TESTTABLE that contains the [ari_productsummary] from the Contract Item entity along with the referring [accountid] to map it back to the Account. I then wrote the following query as a view:
SELECT DISTINCT accountid,
       (SELECT TT2.ari_productsummary + '; '
        FROM TESTTABLE TT2
        WHERE TT2.accountid = TT.accountid
        FOR XML PATH ('')
       ) AS 'ari_activecontractitems'
FROM TESTTABLE TT
Executing that query gives me the results I want, which I can then use for importing into the Account entity.
But how do I do this in an SSIS data flow without writing to a SQL table as a temporary placeholder for the data? I want to do the entire process inside one data flow container, without using a temporary SQL table/view. The whole summarization process needs to be done on the fly.
Does anyone have a solution that doesn't require a temporary SQL table/view/query, but is contained entirely within the data flow?
I am using VS 2017 and the KingswaySoft Dynamic CRM 365 ETL toolset to develop my solution/package.
Spitballing here, as I don't have Dynamics, nor do I have the custom components.
Data Flow 1 - Contract aggregation
The purpose of this data flow is to replicate your logic in the elegant query you provided and shove that into a Cache Connection Manager (see Notes for 2008+ at the end)
KingswaySoft Dynamics Source -> Script Component -> Cache Transform
If you want to keep the sort in there, do it before the Script Component. The implementation I'll take with the Script Component is fully blocking - that is, all the rows must arrive before it can send any on. Transformations like the Merge Join are only partially blocking, because the requirement of sorted data means that once you no longer have a match for the current item, you can send it on down the pipeline.
The Script Component is going to be an asynchronous transformation. You'll have two output columns: your key, accountid, and your new derived column, ari_activecontractitems. That column might need to be big - you'll know your data best, but if it's a blob type in Dynamics (> 4k Unicode or > 8k ASCII characters) then you'll have to define the data type as DT_TEXT/DT_NTEXT.
As inputs, you'll select accountid and ari_productsummary from your source.
The code should be pretty easy. We're going to accumulate the inbound data into a Dictionary.
// member variable
Dictionary<string, List<string>> accumulator;
In the PreExecute method, we'll tack this in to initialize our variable:
// initialize in the PreExecute method
accumulator = new Dictionary<string, List<string>>();
In the per-row input method, Input0_ProcessInputRow:
// simulate the inbound queue
// row_id would be something like Row.accountid,
// invoice something like Row.ariproductsummary
if (!accumulator.ContainsKey(row_id))
{
    // create an empty list for this key
    accumulator.Add(row_id, new List<string>());
}

// add the value if we don't already have it
if (!accumulator[row_id].Contains(invoice))
{
    accumulator[row_id].Add(invoice);
}
Once you get the signal that no more data is available, that's when you start buffering output data. The auto-generated code will have placeholders for all of this.
// This is how we shove data out the pipe
foreach (var kvp in accumulator)
{
    // approximately thus
    OutputBuffer1.AddRow();
    OutputBuffer1.row_id = kvp.Key;
    OutputBuffer1.ari_productsummary = string.Join("; ", kvp.Value);
}
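Pieced together, a minimal sketch of the whole component might look like the following. This is an approximation, not tested code: the method names come from the standard SSIS script component template, the buffer property names are generated by SSIS from your column names and may differ slightly, and the accountid/ari_productsummary columns are assumed from the question.

using System.Collections.Generic;

public class ScriptMain : UserComponent
{
    // Accumulates the distinct product summaries seen for each account.
    private Dictionary<string, List<string>> accumulator;

    public override void PreExecute()
    {
        base.PreExecute();
        accumulator = new Dictionary<string, List<string>>();
    }

    public override void Input0_ProcessInputRow(Input0Buffer Row)
    {
        if (!accumulator.ContainsKey(Row.accountid))
        {
            accumulator.Add(Row.accountid, new List<string>());
        }
        if (!accumulator[Row.accountid].Contains(Row.ariproductsummary))
        {
            accumulator[Row.accountid].Add(Row.ariproductsummary);
        }
    }

    public override void Input0_ProcessInput(Input0Buffer Buffer)
    {
        while (Buffer.NextRow())
        {
            Input0_ProcessInputRow(Buffer);
        }

        if (Buffer.EndOfRowset())
        {
            // All rows have arrived; emit one summarized row per account.
            foreach (var kvp in accumulator)
            {
                Output0Buffer.AddRow();
                Output0Buffer.accountid = kvp.Key;
                Output0Buffer.ariactivecontractitems = string.Join("; ", kvp.Value);
            }
            Output0Buffer.SetEndOfRowset();
        }
    }
}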
We have an upcoming release that comes with a component that does exactly what you are trying to achieve, without the need to write custom code. The feature is currently in preview; please reach out to us for private access. You can find our contact information on our website.
UPDATE - June 5, 2020: we have made the components available for public access at https://www.kingswaysoft.com/products/ssis-productivity-pack/ as part of our 2020 Release Wave 1. We have two components that serve this kind of purpose. The Composition component takes input values and transforms them into a composite value in an SSIS column. The Decomposition component does the opposite: it takes an input value and splits it into multiple rows, using either delimiter-based text splitting or XML/JSON array splitting.

SSIS 2012 Full Result Set to set variables

I'm trying to create an SSIS package that reads a mapping table containing foreign key information and the tables they point to, and stores the full result set. The result set is used to populate 7 variables representing the columns in the result set, which are then used to update an xxxSID column on 6 servers.
I'm stuck! Please help.
I've created an Execute SQL Task with a query that builds the result set, mapped it to the object variable SidMap, and the task runs successfully; however, I don't know where to go from there. Some blogs say to create a Foreach Loop Container and map the object variable to the collection, which I've done. I've also created string variables representing the 7 columns, but I don't know how to populate them.
The blogs I've read so far suggest this can only be done from a Script Task. Is that true? If so, how is it done?
Another user posted a question that sounded like he may be doing the same or a very similar thing using a SQL Task, but I didn't see how he was populating the column object variables and then converting the data into string variables.
SSIS Result set, Foreachloop and Variable
Currently I'm updating the tables manually using a cursor. If anyone cares to see the code I can post it, but I didn't think it relevant to the question beyond providing a clear picture of what I'm doing.
I would create a Foreach Loop Container using the Foreach ADO Enumerator and map the object variable to the collection. I would map the 7 string variables on the Variable Mappings page.
This process is documented in detail here:
http://technet.microsoft.com/en-us/library/cc879316.aspx
A common "gotcha" is mismatched datatypes between the result set and the Variables. To avoid this I always wrap CAST ( ... AS NVARCHAR ( 4000 ) ) or similar around the columns in the dataflow that produces the dataset, and all my receiving Variables are String datatype.

How to access all the entries in MySQL table in Django View?

I am designing a web application using the Django framework. I have written the model code, urls.py, and view code, which can be seen here.
I have added some data into the database table. But when I try to access the objects using the code below, it just shows "bookInfo object" five times. I don't think I have been successful in pulling the data from the database. Kindly help.
View
def showbooks(request):
    booklist = bookInfo.objects.order_by('Name')[:10]
    output = ','.join([str(id) for id in booklist])
    return HttpResponse(output)
You are iterating through the object list; you just need to reference the column/attribute you want (cast to str, since join needs strings):
output = ','.join([str(obj.id) for obj in booklist])
Alternatively you can more finely craft your original db call; then the iterable you use will work. In this case we'll pull out a list of the 'id' attribute.
booklist = bookInfo.objects.order_by('Name').values_list('id', flat=True)[:10]
output = ','.join([str(id) for id in booklist])
I think you are successful in pulling the data. It is just that booklist contains objects, not numeric ids. You can add a __unicode__ method to your class BookInfo that returns a string representation of the object (probably the book name in this case). This method is invoked when str() is applied. You can find more info about __unicode__ here.
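A minimal sketch of what that might look like, using the model name from the question's view code and a Name field guessed from the order_by('Name') call:

from django.db import models

class bookInfo(models.Model):
    Name = models.CharField(max_length=200)

    def __unicode__(self):
        # String representation used by str() on Python 2 / older Django;
        # on Python 3 define __str__ instead.
        return self.Name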

"Diffing" objects from a relational database

Our win32 application assembles objects from the data in a number of tables in a MySQL relational database. Of such an object, multiple revisions are stored in the database.
When storing multiple revisions of something, sooner or later you'll ask yourself the question if you can visualize the differences between two revisions :) So my question is: what would be a good way to "diff" two such database objects?
Would you do the comparison at the database level? (Doesn't sound like a good idea: too low-level, and too sensitive to the schema).
Would you compare the objects?
Would you write a function that "manually" compares the properties and fields of two objects?
How would you store the diff? In a separate, generic "TDiff" object?
Any general recommendations on how to visualize such things in a user interface?
Advice, or stories about your own experiences with this, are very welcome; thanks a bunch!
Extra info on use case (20090515)
In reply to Antony's comment: this specific application is used to schedule training courses, run by teams of teachers. The schedule of a teacher is stored in various tables in the database, and contains info such as "where does she have to go on which day", "who are her colleagues in the team", etc. This information is spread out over multiple tables.
Once in a while, we "publish" the schedule, so the teachers can see it on a webpage. Each "publication" is a revision, and we'd like to be able to show the users (and later also the teachers) what's changed between two publications --- if anything.
Hope that makes the scenario a bit more tangible :)
Some final remarks
Well, the bounty has come to an end, so I've accepted an answer. If it'd somehow be possible to slice a couple of extra 100's off of my rep and give it to some of the other answers, I would do so without hesitation. All your guys' help has been great, and I am very grateful! ~ Onno 20090519
Just an idea, but would it be worthwhile for you to convert the two object versions being compared to some text format and then compare these text representations using an existing diff program - diff itself, for example? There are lots of nice diff programs out there that offer nice visual representations, etc.
So for example
Text version of Object 1:
first_name: Harry
last_name: Lime
address: Wien
version: 0.1
Text version of Object 2:
first_name: Harry
last_name: Lime
address: Vienna
version: 0.2
The diff would be something like:
3,4c3,4
< address: Wien
< version: 0.1
---
> address: Vienna
> version: 0.2
Assume that a class has 5 known properties - date, time, subject, outline, location. When I look at my schedule, I'm most interested in the most recent (ie current/accurate) version of these properties. It would also be useful for me to know what, if anything, has changed. (As a side note, if the date, time or location changed, I'd also expect to get an email/sms advising me in case I don't check for an updated schedule :-))
I would suggest that the 'diff' is performed at the time the schedule is amended. So, when version 2 of the class is created, record which values have changed and store this in two 'changelog' fields on the version 2 object (there must already be one parent table that sits atop all your tables - use that one!). One changelog field is human-readable text, e.g. 'Date changed from Mon 1 May to Tues 2 May, Time changed from 10:00am to 10:30am'. The second changelog field is a delimited list of the changed fields, e.g. 'date,time'. To do this, before saving you would loop over the values submitted by the user, compare them to the current database values, and concatenate two strings: one human-readable, one a list of field names. Then update the data and set your concatenated strings as the 'changelog' values.
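A rough sketch of that save-time comparison (purely illustrative; the field names and sample values are invented):

using System;
using System.Collections.Generic;

class ChangelogDemo
{
    static void Main()
    {
        // Hypothetical current vs submitted values for one schedule.
        var current = new Dictionary<string, string> {
            { "date", "Mon 1 May" }, { "time", "10:00am" }, { "location", "Room 4" } };
        var submitted = new Dictionary<string, string> {
            { "date", "Tues 2 May" }, { "time", "10:30am" }, { "location", "Room 4" } };

        var humanReadable = new List<string>();
        var changedFields = new List<string>();
        foreach (var kvp in submitted)
        {
            if (current[kvp.Key] != kvp.Value)
            {
                humanReadable.Add(kvp.Key + " changed from " + current[kvp.Key] + " to " + kvp.Value);
                changedFields.Add(kvp.Key);
            }
        }

        // The two changelog values to store on the new version:
        Console.WriteLine(string.Join(", ", humanReadable)); // human-readable text
        Console.WriteLine(string.Join(",", changedFields));  // delimited field list
    }
}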
When displaying the schedule load the current version by default. Loop through the fields in the changelog field list, and annotate the display to show that the value has changed (a * or a highlight, etc). Then, in a separate panel display the human readable change log.
If a schedule is amended more than once, you would probably want to combine the changelogs between version 1 & 2, and 2 & 3. Say in version 3 only the course outline changed - if that was the only changelog you had when displaying the schedule, the change to date and time wouldn't be displayed.
Note that this denormalised approach won't be great for analysis - eg working out which specific location always has classes changed out of it - but you could extend it using an E-A-V model to store the change log.
Doing a comparison at the database level would be good if what you cared about was changes to the database. That makes the most sense if you're trying to design a layer of generic functionality on top of the database itself.
Doing a comparison at the object level would be good if you care about changes to the data. For example, if the data was the input to a program and you were interested in looking at changes in the input to verify that changes to the output were correct.
Your use case doesn't appear to be either of these. You appear to care about the output and want differences from that perspective. If that's the case, I would do differences on the output report (or a pure-text version of it) instead of on the underlying data. You can do that with any off-the-shelf diff tool. To make things easier for your end-users you could parse the diff results and render them as HTML. There are lots of options here: side-by-side with color coding to indicate changes, one document with markup for changes (e.g. red strikethrough for deletions and green for additions), maybe just highlight areas that have changed and use balloons to show the previous/current values on demand.
I've thought about doing database comparisons but never tried to implement it. As you noted, any such attempts are intimately intertwined with the schema.
I have done object-level comparisons. The general algorithm was this:
Do a set comparison on the lists of object IDs. This creates three result groupings: added objects, deleted objects, and objects that live in both sets.
Report the deletions.
Report the additions.
For the things in both sets, do an attribute-by-attribute comparison.
If any differences are found, report the object ID, the attributes that differ, and the respective values. If appropriate, highlight the portion of the attribute value that has changed.
In my case, the comparison algorithms were hand-written to match the object attributes. This gave me control over which attributes were compared and how. A generic comparator might be possible for some cases but would depend on the situation and at least partially on the implementation language.
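The ID set comparison in the first step might look like this in C# (a sketch with sample data; the set operations come from LINQ):

using System;
using System.Collections.Generic;
using System.Linq;

class SetDiffDemo
{
    static void Main()
    {
        // Object IDs present in each revision (sample data).
        var oldIds = new HashSet<int> { 1, 2, 3 };
        var newIds = new HashSet<int> { 2, 3, 4 };

        var deleted = oldIds.Except(newIds);    // report as deletions: 1
        var added   = newIds.Except(oldIds);    // report as additions: 4
        var common  = oldIds.Intersect(newIds); // compare attribute-by-attribute: 2, 3

        Console.WriteLine("deleted: " + string.Join(",", deleted));
        Console.WriteLine("added: "   + string.Join(",", added));
        Console.WriteLine("common: "  + string.Join(",", common));
    }
}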
I've looked into MySQL diffing a number of times. Unfortunately, there aren't any really good solutions available.
One tool I've tried was mysqldiff (www.mysqldiff.org). mysqldiff is a tool written in PHP which is capable of diffing MySQL schemas. Unfortunately, it doesn't do a great job a lot of the time.
MySQL Workbench, MySQL's own SQL IDE, provides the option to generate an ALTER script, and I would imagine it does this by performing some kind of diff operation internally.
Aqua Data Studio is another tool that is capable of comparing schemas and outputting a diff of the two. While the ADS diff is quite nice, it does not provide a tool to create an ALTER script.
If I were writing my own, I guess I would write code capable of comparing the structure of two tables. Such code could be tuned to be highly sensitive (e.g. if column order differs from one version to the next, it's a difference) or more moderately sensitive (e.g. column order is not a major issue, but datatypes and lengths are important, as are indices and constraints).
For storage, I'm not too sure. I would look into how a version control system such as Mercurial stores its diff information for revisions, and use that to develop a method appropriate for the DB.
Finally, for visual output I recommend you take a look at the Aqua Data Studio compare feature (you can use the trial version to test this...). Its diff output is pretty good.
My application dbscript compares hierarchical data (database schemas) in a stored procedure, which of course has to compare each field/property of every object with its counterpart. I guess you won't get around that step (unless you have a generic object description model).
As for the UI part of your question, have a look at the screenshots to view and select differences.
I would think about some sort of common text representation of the objects and compare the texts with an existing diffing tool like WinMerge.
I see no need to invent diffing myself, since there are already plenty of nice tools I can use.
In a situation like yours, in PostgreSQL, I used history tables with this schema:
create table history_columns (
    column_id smallint primary key,
    column_name text not null,
    table_name text not null,
    unique (table_name, column_name)
);

create temporary sequence column_id_seq;

insert into history_columns
select nextval('column_id_seq'), column_name, table_name
from information_schema.columns
where
    table_name in ('table1','table2','table3')
    and table_schema = current_schema() and table_catalog = current_database();

create table history (
    column_id smallint not null references history_columns,
    id int not null,
    change_time timestamp with time zone not null
        constraint change_time_full_second -- only one change allowed per second
        check (date_trunc('second', change_time) = change_time),
    primary key (column_id, id, change_time),
    value text
);
And on the tables I used a trigger like this:
create or replace function save_history() returns trigger as
$$
begin
    if (tg_op = 'DELETE') then
        insert into history values (
            find_column_id('id', tg_relname), OLD.id,
            date_trunc('second', current_timestamp),
            OLD.id );
        [for each column_name] {
            if (char_length(OLD.column_name) > 0) then
                insert into history values (
                    find_column_id(column_name, tg_relname), OLD.id,
                    date_trunc('second', current_timestamp), OLD.column_name
                );
            end if;
        }
        return OLD;
    elsif (tg_op = 'UPDATE') then
        [for each column_name] {
            if (OLD.column_name is distinct from NEW.column_name) then
                insert into history values (
                    find_column_id(column_name, tg_relname), OLD.id,
                    date_trunc('second', current_timestamp), OLD.column_name
                );
            end if;
        }
        return NEW;
    end if;
    return null;
end;
$$ language plpgsql volatile;
create trigger save_history_table1
before update or delete on table1
for each row execute procedure save_history();
This isn't really an answer to the question you asked, but rather an attempt to re-imagine the problem. Would you consider altering your database and object model to store the aggregate root and a series of deltas? That is, model and store RevisionSets that are collections of Revisions, where a Revision is an entity property paired with a value. In a sense this internalizes the revision structure into your architecture, where the other posters are suggesting you bolt it on to what you already have via "logs".
It's trivial to display the aggregate from the deltas, and even easier to display the deltas as a change history. The fact that you are using a rich client with state and local memory makes this even more compelling. You could very easily display "all the changes since date xxxx" without revisiting the database.
Credit for the basic idea goes to Greg Young and his work with financial data streams, but it is eminently applicable to your problem.
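A minimal sketch of that model (the class shapes are assumed from the description above, not from any existing code):

using System;
using System.Collections.Generic;

// A Revision pairs an entity property with its new value.
public class Revision
{
    public string Property { get; set; }
    public string Value { get; set; }
}

// A RevisionSet is the collection of deltas captured at one point in time;
// replaying all RevisionSets over the aggregate root yields the current state.
public class RevisionSet
{
    public DateTime CapturedAt { get; set; }
    public List<Revision> Revisions { get; set; } = new List<Revision>();
}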
I'm riffing off of what Harry Lime suggested: output your properties to a text format, then hash the results. That way you can compare the hash values and easily flag the data that has been altered. You get the best of both worlds, as you can see the differences visually but also identify them programmatically. With the hash you'll also have a good source for an index, should you want to store and retrieve the deltas.
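A sketch of that hashing step (assuming the objects have already been rendered to text, as in Harry Lime's example):

using System;
using System.Security.Cryptography;
using System.Text;

class ObjectHasher
{
    // Hash the text rendering of an object so two revisions can be
    // compared cheaply before (or instead of) running a full text diff.
    public static string HashText(string renderedObject)
    {
        using (var sha = SHA256.Create())
        {
            byte[] digest = sha.ComputeHash(Encoding.UTF8.GetBytes(renderedObject));
            return Convert.ToBase64String(digest);
        }
    }
}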
Given you want to create a UI for this and need to indicate where the differences are, it seems to me you can either go custom or create a generic object comparer - the latter being dependent on the language you are using.
For the custom method, you need to create a class that takes two instances of the class to be compared and reports the differences:
public class Person
{
    public string Name;
}

public class PersonComparer
{
    private readonly Person oldPerson;
    private readonly Person newPerson;

    public PersonComparer(Person oldPerson, Person newPerson)
    {
        this.oldPerson = oldPerson;
        this.newPerson = newPerson;
    }

    public bool NameIsDifferent() { return oldPerson.Name != newPerson.Name; }

    public string NameDifferentText() { return NameIsDifferent() ? "Name changed from " + oldPerson.Name + " to " + newPerson.Name : ""; }
}
This way you can use the PersonComparer object to create your GUI.
The generic approach would be much the same, except that you generalize the calls and use object inspection (the getObjectProperty call below) to find differences:
public class ObjectComparer
{
    private readonly object oldObj;
    private readonly object newObj;

    public ObjectComparer(object oldObj, object newObj)
    {
        this.oldObj = oldObj;
        this.newObj = newObj;
    }

    // getObjectProperty reads a property by name via reflection
    private static object getObjectProperty(object obj, string name) { return obj.GetType().GetProperty(name).GetValue(obj, null); }

    public bool PropertyIsDifferent(string propertyName) { return !Equals(getObjectProperty(oldObj, propertyName), getObjectProperty(newObj, propertyName)); }

    public string PropertyDifferentText(string propertyName) { return PropertyIsDifferent(propertyName) ? propertyName + " changed from " + getObjectProperty(oldObj, propertyName) + " to " + getObjectProperty(newObj, propertyName) : ""; }
}
I would go for the second, as it makes it really easy to change the GUI as needs change. In the GUI I would try 'yellowing' the differences to make them easy to spot - but that depends on how you want to show the differences.
Getting the objects to compare would be a matter of loading your object in its initial revision and its latest revision.
My 2 cents... Not as techy as the database compare stuff already here.
Have you looked at Open Source DiffKit?
www.diffkit.org
I think it does what you want.
Example with Oracle:
Export ordered objects to text with dbms_metadata
Export ordered table data into CSV or query format
Make one big text file
Diff
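For the first step, a hedged sketch of what the DDL export could look like (the schema name is invented, and some object types need their names adjusted for dbms_metadata, e.g. 'PACKAGE BODY' becomes 'PACKAGE_BODY'):

-- Export each object's DDL as text, ordered for stable diffing.
SELECT dbms_metadata.get_ddl(object_type, object_name, owner) AS ddl_text
FROM all_objects
WHERE owner = 'APP_SCHEMA'
  AND object_type IN ('TABLE', 'VIEW', 'INDEX', 'PROCEDURE', 'FUNCTION')
ORDER BY object_type, object_name;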