Uploading time values from an Excel sheet to MySQL

I have to insert time values from Excel into MySQL, like in the image below.
int punchin = (int) row.getCell(1).getNumericCellValue();
System.out.println("::::::::::::::::: "+punchin);
int punchout = (int) row.getCell(2).getNumericCellValue();
System.out.println("::::::::::::::::: "+punchout);
int duration = (int) row.getCell(3).getNumericCellValue();
System.out.println("::::::::::::::::: "+duration);
But I couldn't get the proper value.

Use MySQL for Excel: https://dev.mysql.com/downloads/windows/excel/
It's really easy to use. Once you have installed it, go to the Data tab in Excel; MySQL for Excel will appear at the end of the ribbon.
Assuming you've now set up the connection to your database, all you have to do is the following:
Select all the data, excluding the column headers.
On the side panel, click the database table that you will be putting the data into.
Click Append Excel Data to Table.
Map any columns in the data to your database columns using the tool provided at this point.
Click Append and wait.
Your data should now be uploaded.
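If you would rather fix the Java code from the question: Excel stores a time of day as a fraction of a 24-hour day, so getNumericCellValue() returns roughly 0.3958 for 09:30, and the cast to int truncates that to 0. A minimal sketch of two common fixes with Apache POI, assuming the cells hold genuine Excel time values (row comes from the question's code):

// Excel represents 09:30 as the day-fraction 0.3958..., so scale it
// to whole seconds since midnight instead of casting to int.
double punchinRaw = row.getCell(1).getNumericCellValue();
int punchinSeconds = (int) Math.round(punchinRaw * 24 * 60 * 60);
// Or let POI convert the cell to a java.util.Date and format only
// the time-of-day part, ready for a MySQL TIME column.
java.util.Date punchin = row.getCell(1).getDateCellValue();
String punchinText = new java.text.SimpleDateFormat("HH:mm:ss").format(punchin); // e.g. "09:30:00"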


Write into Excel Destination from SSIS variables

I have 3 SSIS variables, namely name, age and gender, with initial values set. I want to write these values into an Excel sheet in one row. Later I will extend this to an array of records.
To do this I created an Excel connection pointing at the Excel sheet I want to write to.
On the control flow I added a Data Flow Task, double-clicked it, and then added a Derived Column component to create a derived column for each of the above 3 variables. Inside the Derived Column editor I selected the above variables as new derived columns.
I then pipelined an Excel Destination component and mapped the sheet columns to the derived columns. I executed the SSIS package and it was successful, but the variables are not written into the Excel sheet.
What am I doing wrong?
Again, you need a source. I gave you an "easy" solution earlier; this is probably the best solution to your problem:
This time the source will be a Script Component (select Source).
Steps after you add the Script Component:
Select Source
Go to Inputs and Outputs
Add your Output Columns (don't forget about data types)
Go back to Script
Add your variables (Gender, Name and Age) as ReadOnlyVariables
Go into the script
Add the following code
public override void CreateNewOutputRows()
{
    // Emit a single row populated from the package variables
    // (the ones added as ReadOnlyVariables on the Script page).
    Output0Buffer.AddRow();
    Output0Buffer.Age = Variables.Age;
    Output0Buffer.Gender = Variables.Gender;
    Output0Buffer.Name = Variables.Name;
}
You need a source; the easiest would be to use a SQL connection.
Use a variable of type String named SQL.
Set SQL = "SELECT '" + name + "' AS name, " + age + " AS age, '" + gender + "' AS gender"
Set your source to the SQL variable.
Connect this source to the destination and you should have 1 row with 3 columns.
Listing the steps clearly, as suggested by @KeithL:
Create an SSIS variable selectQueryVariables with String datatype.
Assign the variable expression as
"SELECT '" + @[User::name] + "' as Name, '" + @[User::gender] + "' as Gender, " + (DT_WSTR,4)@[User::age] + " as Age"
Add an OLE DB Source component, set the data access mode to SQL command from variable, and select the variable selectQueryVariables in the dropdown. Now the source is ready with 3 columns: Name, Age and Gender.
Pipeline this with the Excel Destination and map the source and destination columns.
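For example, with name = John, gender = M and age = 25 (made-up values for illustration), the expression evaluates to:

SELECT 'John' as Name, 'M' as Gender, 25 as Age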

Flask SQLAlchemy query is returning null for data that exists in my database. What could be the cause?

My Python program is meant to query my MySQL database for a record. The record exists in the database and contains data, but the program returns null values. The table being queried is titled Market. In that table there is a column titled market_cap and a column titled volume. When I use MySQL Workbench to query my database, the result shows that there is data in the columns. However, the program receives null.
Attached are two images (links, because I need to earn 10 reputation points to embed images in a post):
MySQL database column image
shows a view of the column in my database that is having issues.
From the image, you can see that the data I need exists in my database.
Code with results from the PyCharm debugger
Before running the debugger, I set a breakpoint right after the line where the code queries the database for an object. Image two shows the output I received when the code queried the database.
Screenshot of the Market model
Screenshot of the solution: I found that converting the market cap (market_cap) before adding it to the dictionary (price_map) returns the correct value; you can see it in line 138.
What could cause existing data in a record to be returned as null?
import logging

from flask_restful import Resource
from api.resources.util.date_util import (pretty_date_to_epoch,
                                          epoch_to_pretty_date)
from common.decorators import log_exception
from db.models import db, Market

log = logging.getLogger(__name__)


def map_date_to_price():
    buy_date_list = ["2015-01-01", "2015-02-01", "2015-03-01"]
    sell_date_list = ["2014-12-19", "2014-01-10", "2015-01-20",
                      "2016-01-10"]
    date_list = buy_date_list + sell_date_list
    market_list = []
    price_map = {}
    for day in date_list:
        market_list.append(db.session.query(Market).
                           filter(Market.pretty_date == day).first())
    for market in market_list:
        price_map[market.pretty_date] = market.market_cap
    return price_map
The two fields that are (apparently) being retrieved as null are both db.Numeric. http://docs.sqlalchemy.org/en/latest/core/type_basics.html notes that these are, by default, backed by a decimal.Decimal object, which I'll bet can't be converted to JSON, so what comes back from Market.__repr__() will show them as null.
I would try adding asdecimal=False to the two db.Numeric() calls in Market.
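A minimal sketch of that change, assuming a model shaped like the question's screenshots (column names and types here are guesses based on the question):

from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

class Market(db.Model):
    __tablename__ = "Market"

    id = db.Column(db.Integer, primary_key=True)   # assumed key column
    pretty_date = db.Column(db.String(10))         # e.g. "2015-01-01"
    # asdecimal=False makes SQLAlchemy return plain Python floats instead
    # of decimal.Decimal objects, which do not serialize to JSON.
    market_cap = db.Column(db.Numeric(asdecimal=False))
    volume = db.Column(db.Numeric(asdecimal=False))

Alternatively, leave the model alone and convert at the point of use, as the asker's own solution screenshot suggests: price_map[market.pretty_date] = float(market.market_cap).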

MySQL summary status table to store results of operations

I have a MySQL (5.6) database on my local workstation into which I routinely pull large datasets to perform analysis on. I have a separate SQL script for each dataset that imports the data and reformats it when needed (notably to convert date formats). In addition, I have other scripts that perform detailed analysis on the data.
For quality assurance, I would like to have a table named ImportLog that stores a record to capture the result of each import that is run. This table would look like the following:
ImportName DateRun RowsImported
---------- ------- ------------
ImportASR 2015-08-29 12902
ImportEAD 2015-08-30 18023
ImportHRData 2015-08-30 122376
The column definitions for ImportLog are as follows:
ImportName // the name of the script that is run
DateRun // the date that the script is run
RowsImported // the count of records imported in the run.
At the very end of each script would be the code to write one line to this table with the relevant data. For example, let's say that I ran the script named ImportASR on 8/29/2015 and it imported 12,902 records. At the end of the script, I want to append one record to ImportLog (like the first record in the table above) using something like this:
INSERT INTO ImportLog
VALUES("ImportASR", $DateRun, $RowCount);
Every time I run one of the import scripts, it would add a row to the ImportLog table with the appropriate data.
My question is: How do I populate the $DateRun variable with the current date and the $RowCount variable with the row count of the newly imported ASR dataset? Or am I trying to approach this from the wrong angle?
First thing this morning I stumbled upon the answer to my problem; it was amazingly simple, and to my surprise it didn't need any variables at all. The code to put at the end of each import script is something like:
INSERT INTO ImportLog (ImportName, DateRun, RowsImported)
SELECT 'Script: ImportASR',
       NOW(),
       (SELECT COUNT(*) FROM ASR_Full);
The ImportLog table is initially defined like so:
CREATE TABLE ImportLog (
    ImportName VARCHAR(25),
    DateRun DATETIME,
    RowsImported INT
);
Hope this helps someone else!
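A possible alternative, in case the table-level COUNT(*) could pick up rows left over from a previous run: MySQL's ROW_COUNT() function returns the number of rows affected by the immediately preceding statement, so you can capture it into a user variable right after the import statement. A sketch (the capture must happen before any other data-changing statement runs):

SET @rows = ROW_COUNT();  -- run immediately after the import statement
INSERT INTO ImportLog (ImportName, DateRun, RowsImported)
VALUES ('Script: ImportASR', NOW(), @rows);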

Converting an Image column is very slow

I want to convert data from an old database to a new database with a new structure.
In the old database I have an attachment table that must be converted to the attachment table in the new database.
The old database's attachment table structure is below:
Attachment (ID int, Image Image, ...)
and the new database's attachment table structure is below:
Attachment (ID int, Image Image, OldID int, ...)
Each time I execute the convert package, it copies only data that does not yet exist (new data) from the old database to the new database.
I use the following approach to do it:
a lookup between the old table and the new table (ID --> OldID) to check for existing records.
When I run the SSIS package, SSIS first caches all lookup and source component data in memory, then executes the package. The source data in this package is very large, and when I run the package it runs very slowly. I want to get the Image column data from the old database for each new record, after the lookup that checks for existing records. If I use a new Lookup component to get the Image column data from the old database, SSIS caches this new lookup's data as well, and the execution time of the package does not improve. What must I do?
Thanks in advance.
Are you sure you're thinking this through correctly? SSIS should not be slow even if the amount of data you are loading is huge.
Your Lookup component needs to make sure it's not doing anything it doesn't need to. If you are pointing it at the table in the new database, change it to a SQL query at once. In this query you only need to SELECT OldID FROM tbl, and point the incoming ID from the old database at this. Your data flow should contain ID and Image from the old database, mapped ID -> OldID and Image -> Image in your OLE DB Destination. No more is needed for an "insert new rows only" operation like you are doing here.
For this job, there is no need for any custom code or dynamic SQL. You do want to get the ID and Image from your source system in the data flow (unless you have major network bottlenecks to sort out); doing a row-by-row (RBAR) lookup to get the image data from the old system is a very backwards way of thinking about your ETL.
Select only ID from the source table.
Do the lookup in the destination DB with no change.
For its No Match output, do a lookup in the source table with Cache mode set to No cache, which will append Image to the flow.
In this case each image will be fetched separately, which may affect performance.
You may also do it in two Data Flows.
In the first:
Select only ID from the source table.
Do the lookup in the destination DB with no change.
Store the new IDs in a string variable IdListToBeFetched as a comma-separated list, using a Script Component as destination with code similar to:
using System.Text;

[Microsoft.SqlServer.Dts.Pipeline.SSISScriptComponentEntryPointAttribute]
public class ScriptMain : UserComponent
{
    StringBuilder sb;

    public override void PreExecute()
    {
        base.PreExecute();
        sb = new StringBuilder();
    }

    public override void PostExecute()
    {
        base.PostExecute();
        // Write the accumulated list back to the package variable,
        // dropping the trailing comma.
        Variables.IdListToBeFetched = sb.ToString().TrimEnd(',');
    }

    public override void Input0_ProcessInputRow(Input0Buffer Row)
    {
        if (!Row.ID_IsNull)
        {
            sb.AppendFormat("{0},", Row.ID);
        }
    }
}
In the second Data Flow, set the SQL command of the source to a dynamically generated query from an expression similar to "select ID, Image from Attachment where ID in (" + @[User::IdListToBeFetched] + ")" and set DelayValidation = True. It will fetch all the images in a single SELECT, which should be faster.
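For example, if the first Data Flow collected IDs 101, 102 and 105 (made-up values for illustration), the expression evaluates to:

select ID, Image from Attachment where ID in (101,102,105)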
To set the dynamically generated query as the SqlCommand of sources like the ADO NET Source or ODBC Source:
select the Expressions property of the Data Flow Task containing your source
find the property [your source name].[SqlCommand] and set the expression there
To set the dynamically generated query as the SQL command in an OLE DB Source (taken from Jamie Thomson's blog):
Create a new variable called SourceSQL
Open up the properties pane for the SourceSQL variable (by pressing F4)
Set EvaluateAsExpression=TRUE
Set Expression to "select ID, Image from Attachment where ID in (" + @[User::IdListToBeFetched] + ")"
For your OLE DB Source component, open up the editor
Set Data Access Mode="SQL Command from variable"
Set VariableName = "SourceSQL"

How to import a fixed width flat file into database using SSIS?

Does anyone have a tutorial on how to import a fixed-width flat file into a database using an SSIS package?
I have a flat file containing columns with varying lengths.
Column name Width
----------- -----
First name 25
Last name 25
Id 9
Date 8
How do I split the flat file into these columns?
Here is a sample package created using SSIS 2008 R2 that explains how to import a flat file into a database table.
Create a fixed-width flat file named Fixed_Width_File.txt with data as shown in the screenshot. The screenshot uses Notepad++ to display the file contents. It has the capability to show the special characters like carriage return and line feed. CR LF denotes the row delimiters Carriage return and Line feed.
In the SQL Server database, create a table named dbo.FlatFile using the create script provided under the SQL Scripts section below.
Create a new SSIS package and add a new OLE DB connection manager that connects to the SQL Server database. Let's assume that the OLE DB connection manager is named SQLServer.
On the package's control flow tab, place a Data Flow Task.
Double-click on the data flow task and you will be taken to the data flow tab. On the data flow tab, place a Flat File Source. Double-click on the flat file source and the Flat File Source Editor will appear. Click the New button to open the Flat File Connection Manager Editor.
On the General section of the Flat File Connection Manager Editor, enter a value in Connection manager name (say Source), then browse to the flat file location and select the file. This example uses the sample file in the path C:\temp\Fixed_Width_File.txt. If you have header rows in your file, you can enter a value of 1 in the Header rows to skip textbox to skip the header row.
Click on the Columns section. Change the font to your liking; I chose Courier New so I could see more data with less scrolling. Enter the value 69 in the Row width text box. This value is the sum of the widths of all your columns + 2 for the row delimiter. Once you have set the correct row width, you should see the fixed-width file data laid out correctly in the Source data columns section. Now you have to click at the appropriate locations to mark the column limits. Note sections 4, 5 and 6 in the screenshot below.
Click on the Advanced section. You will notice 5 columns created for you automatically based on the column limits that we set on the Columns section in the previous step. The fifth column is for row delimiter.
Rename the columns to FirstName, LastName, Id, Date and RowDelimiter.
By default, the columns will be set with data type string [DT_STR]. If we are fairly certain that a certain column will be of a different data type, we can configure it in the Advanced section. We will change the Id column to be of data type four-byte signed integer [DT_I4] and the Date column to be of data type date [DT_DATE].
Click on the Preview section. The data will be shown as per the column configuration.
Click OK on the Flat file connection manager editor and the flat file connection will be assigned to the Flat File Source in the data flow task.
On the Flat File Source Editor, click on the Columns section. You will notice the columns that were configured in the flat file connection manager. Uncheck the RowDelimiter because we won't need that.
On the data flow task, place an OLE DB Destination. Connect the output from the Flat file source to the OLE DB Destination.
On the OLE DB Destination Editor, select the OLE DB Connection manager named SQLServer and set the Name of the table or the view drop down to [dbo].[FlatFile]
On the OLE DB Destination Editor, click on the Mappings section. Since the column names in the flat file connection manager are the same as the columns in the database, the mapping will take place automatically. If the names are different, you have to map the columns manually. Click OK.
Now the package is ready. Execute the package to load the fixed-width flat file data into the database.
If you query the table dbo.FlatFile in the database, you will notice the flat file data imported into the database.
This sample should give you an idea of how to import a fixed-width flat file into a database. It doesn't explain how to handle error logging, but it should get you started and help you discover other SSIS-related features as you play with packages.
Hope that helps.
SQL Scripts:
CREATE TABLE [dbo].[FlatFile](
    [Id] [int] NOT NULL,
    [FirstName] [varchar](25) NOT NULL,
    [LastName] [varchar](25) NOT NULL,
    [Date] [datetime] NOT NULL
)
In the Derived Column transformation you can use the SUBSTRING() function for each of the columns.
Example:
Columns     DerivedColumn
FirstName   SUBSTRING(Data, startFrom, length)
Note that SUBSTRING() in SSIS expressions is 1-based, not 0-based. FirstName has a width of 25, so it starts at position 1 and the derived column expression is SUBSTRING(Data, 1, 25).
Similarly for the other columns, as listed below.
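Applying that to the widths listed in the question, with 1-based start positions (Data is assumed to be the single input column holding the whole row):
Column      Derived column expression
---------   -------------------------
FirstName   SUBSTRING(Data, 1, 25)
LastName    SUBSTRING(Data, 26, 25)
Id          SUBSTRING(Data, 51, 9)
Date        SUBSTRING(Data, 60, 8)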
Very well explained, Siva! Your tutorial and excellent illustrations point out what Microsoft should have made clear:
that the width of a fixed-length row has to include the carriage return and line feed (CR & LF) characters (which I figured out because the preview showed the rows were not lining up correctly);
the all-important step of defining an extra column to contain those CR & LF characters, even though they won't be imported. I figured this out, too; I would have benefited from finding your answer before I began.
Without those two things, an attempt to run the import will give this error message:
The data conversion for column "Column x" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.".
I have added this error text in hopes someone will find this page while searching for the cause of their error. Your tutorial is worth finding, even after the fact!