Write Code in a Spreadsheet Using Visual Basic - json

I am new to Excel VBA and I am trying to achieve what is displayed in the Result column in the image. I need to develop code in the spreadsheet which would create JSON based on the occupation of an individual. Please advise how I can easily fill data in column H (Result) using Visual Basic.

As Ron Rosenfeld mentioned, your data is inconsistent from row to row. Is there some logic to decide which columns to use? You also don't need VBA to do this. You could use the CONCATENATE function as below:
=CONCATENATE("{""",$A$1,""":""",A2,""",""",$B$1,""":""",B2,"""}")
The output for row two would be:
{"Occupation":"Doctor","FirstName":"Alex"}
If there are conditions, you can also use the IF function to only show certain columns. If it is more complex, I would download the json.bas file (VBA-JSON) from GitHub and generate the JSON using that module.
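If you do want a VBA route, here is a minimal sketch of the same two-column idea; the sheet name, the headers in A1:B1, and the H output column are assumptions based on the screenshot:

Sub BuildJsonColumn()
    ' Assumes headers in row 1 (A1 and B1) and data starting in row 2;
    ' writes a two-field JSON string into column H for each data row.
    Dim ws As Worksheet
    Dim lastRow As Long, r As Long

    Set ws = ThisWorkbook.Worksheets("Sheet1") ' adjust the sheet name
    lastRow = ws.Cells(ws.Rows.Count, "A").End(xlUp).Row

    For r = 2 To lastRow
        ws.Cells(r, "H").Value = "{""" & ws.Cells(1, "A").Value & """:""" & _
            ws.Cells(r, "A").Value & """,""" & ws.Cells(1, "B").Value & _
            """:""" & ws.Cells(r, "B").Value & """}"
    Next r
End Sub

Like the formula, this does no escaping of quotes inside cell values; for anything beyond flat name/value pairs, the json.bas module is the safer choice.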

Related

Advanced mapping of JSON in Azure Data Factory - some guidance requested

I'm trying to map a JSON document (sensor data) into a more meaningful representation using Mapping Data Flows. However, I'm having a hard time getting this to work and would really appreciate some insight/recommendations on how to solve the following:
The input is
What I would like to end up with is the following:
Any pointers as to how this can be implemented are more than welcome.
This can be accomplished using the Copy activity and then the split function in a Derived Column transformation in Azure Data Factory.
Use the Copy activity to read the JSON file as the source and, as the sink, use a SQL database to store the data as a table. In the Mapping tab, import the schema and map the JSON records to the corresponding column names. Refer to this third-party tutorial for guidance - https://sqlkover.com/dynamically-map-json-to-sql-in-azure-data-factory/
Finally, use the Data Flow activity and choose as its source the SQL table you used as the sink above.
Select the Derived Column transformation.
Use the split function.
Add a derived column to hold the split values, based on the column you want to split.
Use split(<column_name_to_split>, '_') to split the column on the _ delimiter, replacing <column_name_to_split> with the name of the column you want to split (see the example expression after these steps).
Preview the data to check the result.
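As a concrete illustration (the column name SensorKey and values like device01_temperature are assumptions, since the original screenshots are not reproduced here):

split(SensorKey, '_')       returns the array ['device01', 'temperature']
split(SensorKey, '_')[1]    returns 'device01' (Data Flow arrays are 1-based)
split(SensorKey, '_')[2]    returns 'temperature'

Each indexed element can then be assigned to its own derived column.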

SSIS package - How to rename pivoted columns

I'm new to SSIS and just created a simple package that takes input from a source file, pivots it, and inserts it into the database. It works well for me.
I am aware that I can provide an alias name for each column under Pivot > Advanced Editor > Input and Output Properties > Pivot Default Output > Output Columns > set the "Name" property to whatever I want. I want to ask if there is a way to rename the pivoted columns programmatically. I have about 100 columns, so I thought it would be more effective to do this in code, but I'm not sure how. I tried to add a script component but was not able to get to the "Name" property... My end goal is to remove the "C_" prefix from the auto-generated pivot column names so that, when I insert the records into the db, the columns can auto-map for me.
Your goal "rename columns dynamically in package itself" contradicts to basic SSIS approach, which is "fix metadata, including column type and name, at design phase, and reuse at runtime". So, no script component can rename your columns.
You can use BIML or EzAPI/Microsoft SSIS DLLs to generate package based on your metadata; but once you design it, the package metadata including column names is fixed.

PowerBI - Excel datasource contains JSON in column

I have an excel sheet that has three columns A, B and C.
A and B contain regular text, a first name and a last name, if you will. The third column, C, contains JSON data.
Is there a way I can read this file into Power BI and have it automatically parse the JSON data out into additional columns? In the Power BI Desktop client I can use an Excel sheet as the data source, and it loads my data into the client; however, it naturally treats column C as just text. I've had a look at the Advanced Editor and I'm thinking I might have to include something in there to help parse that out.
Any ideas?
I figured it out. In the Query Editor, right-click on the column that contains the JSON, go to Transform and select JSON. It will parse out the data, allowing you to add the fields in as additional columns.
Extremely handy!
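For reference, the same transformation expressed as Advanced Editor M code might look roughly like the following; the file path, sheet name, column name "C" and the JSON field names are all assumptions:

let
    Source = Excel.Workbook(File.Contents("C:\data\people.xlsx"), null, true),
    Sheet = Source{[Item = "Sheet1", Kind = "Sheet"]}[Data],
    Promoted = Table.PromoteHeaders(Sheet, [PromoteAllScalars = true]),
    // Parse the JSON text in column C into record values
    Parsed = Table.TransformColumns(Promoted, {{"C", Json.Document}}),
    // Expand the parsed records into their own columns
    Expanded = Table.ExpandRecordColumn(Parsed, "C", {"field1", "field2"})
in
    Expanded

The right-click Transform > JSON step generates the Json.Document call for you; Table.ExpandRecordColumn corresponds to clicking the expand icon on the column header afterwards.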

Graphical representation of HTML table data using Highcharts

I am trying to display the data from an HTML table in graphical form using Highcharts. When I use all the data from the table I am able to render the chart. I want to use only part of the data from my table (certain columns) to render a chart. Can anyone please tell me how to do this? If you have snippets, please provide them; that would be helpful.
All 'docs' are included inline in the data.src.js module:
http://code.highcharts.com/modules/data.src.js
For now only startRow/endRow/startColumn/endColumn options are supported:
table : String|HTMLElement
An HTML table, or the id of such, to be parsed as input data. Related options are startRow, endRow, startColumn and endColumn to delimit what part of the table is used.
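For illustration, a minimal sketch assuming an HTML table with id "datatable", that only the first three columns should be charted, and that the data module (http://code.highcharts.com/modules/data.js) is loaded after highcharts.js:

Highcharts.chart('container', {
    data: {
        table: 'datatable',  // id of the HTML table to parse
        startColumn: 0,      // first parsed column supplies the categories
        endColumn: 2         // parse columns 0..2 only; ignore the rest
    },
    chart: { type: 'column' },
    title: { text: 'Chart from selected table columns' }
});

Rows can be limited the same way with startRow and endRow.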

How to export a flat file with different rows using SSIS?

I have three tables, Customer, Invoice and InvoiceRow, with the standard relations.
These I have to export into one fixed-field-length file, with the first two characters of each row identifying the row type. The row types have different specifications.
I could probably do it with a nested loop in a script block, but this is my first ever SSIS package and that solution feels wrong.
edit:
The output has to have:
Customer
Invoice
Rows
Customer
Invoice
Rows
and so on
Your gut feeling about doing this with a Script Destination component is correct. Unfortunately, this scenario doesn't mesh well with SSIS; I don't consider this a beginner package. If you must use SSIS, then I'd start by inner joining all the data so there is one row for each InvoiceRow, containing the data needed from all three tables:
CustomerCols, InvoiceCols, RowCols
Then, in the script destination component, you'll need to keep track of the customer and invoice values; as they change, you'll need to write extra rows to the output.
See Creating a Destination with the Script Component for more information on script destinations.
My experience shows that script destinations can have good performance.
I would avoid writing a Script Destination and instead use just a Script Transform + Flat File Destination. This way you concentrate on the logical output (strings of data) while letting SSIS do the actual writing to the file (it might be a bit more efficient, and you concentrate on your business logic, not on file handling).
First, you'll need to denormalize the data. You can do the joins and sorts in the DBMS, but if you don't want to put too much pressure on the DBMS, just get sorted data out of it and merge it using two SSIS Merge Join transforms.
Then write the script: keep running values of the current Customer and Invoice, output them when they change, and output an InvoiceRow line for every input row. Something like this:
// Inside the per-row method of the Script Transform. When the customer
// changes, emit a customer header row before its detail rows.
if (this.CustomerID != InputBuffer.CustomerID)
{
    this.CustomerID = InputBuffer.CustomerID;
    OutputBuffer.AddRow();
    OutputBuffer.OutputColumn = "Customer: " + InputBuffer.CustomerID + " " + InputBuffer.CustomerName;
}
// repeat the same check for Invoice
// every input row produces an InvoiceRow detail line
OutputBuffer.AddRow();
OutputBuffer.OutputColumn = "InvoiceRow: " + InputBuffer.InvoiceRowPrice;
Finally, add a Flat File Destination with a single column (the OutputColumn created by the script) to write this to the file.
Process your three tables so that the outputs are all appropriate for your output file (including the row type designator). You'll have to do this in three separate paths in your data flow, then bring the rows together with a Union All transformation. From there, process them as needed to create your output file; see the sketch below.
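As a rough T-SQL sketch of what one of those three paths might select (the table and column names, the 'CU' row-type code and the 80-character width are all assumptions), each path can emit one pre-formatted line plus sort keys:

-- One fixed-width line per customer; 'CU' is the assumed two-character row type.
SELECT
    'CU' + LEFT(c.CustomerName + REPLICATE(' ', 78), 78) AS OutputLine,
    c.CustomerID AS SortCustomer,
    0            AS SortInvoice,  -- customers sort ahead of their invoices
    0            AS SortRowType
FROM dbo.Customer AS c;

After the Union All, a Sort on SortCustomer, SortInvoice and SortRowType restores the Customer / Invoice / Rows order, and a single-column Flat File Destination writes OutputLine.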