Laravel using model column alias - laravel-5.4

I have a MySQL table with dozens of columns. There is a set of VARCHAR columns (s1, s2, s...) to store string values, a set of INT columns (n1, n2, n...) to store integer values, and so on.
Depending on the type of the data to be stored in this table, I have to pick the right columns for the CRUD operations.
That is, if I need to query the table to fetch the records belonging to type 'X', then I should know that the columns used to store data for type 'X' are, say, s1, s3, s4 and n3, and so on.
Likewise, if I want to store data belonging to type 'X', then the table columns to be used in the INSERT statement (s1, s3, s4 and n3) have to be determined as well.
(I hope I am clear enough for you to understand my requirement)
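For illustration, the type-to-columns lookup described in the question could live in a simple map. Here is a minimal sketch in Python (a Laravel app would more likely keep this in a config array or a model property; the names `TYPE_COLUMNS` and `records` are hypothetical):

```python
# Hypothetical map from record type to the generic columns that type uses.
TYPE_COLUMNS = {
    "X": ["s1", "s3", "s4", "n3"],
}

def select_sql(record_type, table="records"):
    # Build the SELECT column list for the given record type.
    cols = TYPE_COLUMNS[record_type]
    return f"SELECT {', '.join(cols)} FROM {table} WHERE type = ?"

print(select_sql("X"))  # SELECT s1, s3, s4, n3 FROM records WHERE type = ?
```

The same map can drive the column list of an INSERT for that type.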

Related

how to map column names in a hive table and replace it with new values in hive table

I have CSV data as below, arriving every 10 minutes in the following format. I need to insert this data into Hive, mapping the source column names to different column names. (The columns don't come in a constant order; they change order. There are 10 columns in total, and sometimes many of them are missing, as in the example below.)
Sample CSV file:
1 2 6 4
u f b h
a f r m
q r b c
Now, while inserting into Hive, I need to replace the column names.
For example:
1 -> NBR
2 -> GMB
3 -> GSB
4 -> KTC
5 -> VRV
6 -> AMB
Now I need to insert into the Hive table as below:
NBR GMB GSB KTC VRV AMB
u f NULL h NULL b
a f NULL m NULL r
Can anyone help me with how to insert these values into Hive?
Assuming you can get column headers in your source CSV, you will need to map them from the source numbers to their column names, restricting the substitution to the header line so the data rows are left untouched:
sed -i '1s/1/NBR/; 1s/2/GMB/; 1s/3/GSB/; 1s/4/KTC/; 1s/5/VRV/; 1s/6/AMB/; ...' input.csv
Since you only get an unknown subset of the total columns in your Hive table, you will need to translate your CSV from
NBR,GMB,AMB,KTC
u,f,b,h
a,f,r,m
q,r,b,c
to
NBR,GMB,GSB,KTC,VRV,AMB,...,...,...,...
u,f,null,h,null,b,null,null,null,null
a,f,null,m,null,r,null,null,null,null
q,r,null,c,null,b,null,null,null,null
in order to properly insert them into your table.
From the Apache Wiki:
Values must be provided for every column in the table. The standard SQL syntax that allows the user to insert values into only some columns is not yet supported. To mimic the standard SQL, nulls can be provided for columns the user does not wish to assign a value to.
Standard Syntax:
INSERT INTO TABLE tablename [PARTITION (partcol1[=val1], partcol2[=val2] ...)] VALUES values_row [, values_row ...]
Where values_row is:
( value [, value ...] )
where a value is either null or any valid SQL literal
Using LOAD DATA INPATH, even with tblproperties("skip.header.line.count"="1") set, still requires a valid SQL literal for every column in the table. This is why you're missing columns.
If you cannot get the producer of the CSV to create a file with all 10 columns (1,2,...,9,10) in the same order as your table columns, with either consecutive commas or a null marker for missing data, write some kind of script to add the missing column names, in the order you need them, and the required null values in the data.
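Such a script could be sketched in Python along these lines (the number-to-name mapping and the six-column target order are taken from the example above; a real table would list all 10 columns):

```python
import csv
import io

# Mapping from source column numbers to Hive column names (from the example).
NAME_MAP = {"1": "NBR", "2": "GMB", "3": "GSB", "4": "KTC", "5": "VRV", "6": "AMB"}
# Full column order of the target Hive table.
TARGET_ORDER = ["NBR", "GMB", "GSB", "KTC", "VRV", "AMB"]

def remap(src_text):
    # Rewrite a CSV with a numeric header into the full target layout,
    # filling absent columns with the literal string 'null'.
    reader = csv.reader(io.StringIO(src_text))
    header = [NAME_MAP[h] for h in next(reader)]
    rows_out = [TARGET_ORDER]
    for row in reader:
        present = dict(zip(header, row))
        rows_out.append([present.get(col, "null") for col in TARGET_ORDER])
    return "\n".join(",".join(r) for r in rows_out)

print(remap("1,2,6,4\nu,f,b,h\na,f,r,m\nq,r,b,c"))
```

The output can then be loaded with LOAD DATA INPATH as usual.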
If you have a header in the CSV like 1,2,3,4 (as you wrote in the comment), you could use the following syntax:
insert into target_table (the columns you want to populate) select `1`, `2`, `3`, `4` from csv_table;
So, if you know the order of the CSV columns, you can write the insert easily, naming only the columns you need to populate, no matter their order in the target table.
Before you can run the above insert, you have to create a table that reads from the CSV.
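The insert-select pattern itself can be exercised outside Hive; here is a minimal sketch using SQLite through Python (the table and column names are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Staging table mirroring the CSV header, plus the real target table.
cur.execute('CREATE TABLE csv_table ("1" TEXT, "2" TEXT, "4" TEXT)')
cur.execute("CREATE TABLE target (NBR TEXT, GMB TEXT, GSB TEXT, KTC TEXT)")
cur.execute("INSERT INTO csv_table VALUES ('u', 'f', 'h')")
# Name only the target columns you can populate; the rest stay NULL.
cur.execute('INSERT INTO target (NBR, GMB, KTC) SELECT "1", "2", "4" FROM csv_table')
row = cur.execute("SELECT NBR, GMB, GSB, KTC FROM target").fetchone()
print(row)  # ('u', 'f', None, 'h')
```

Note that Hive (unlike SQLite here) historically required values for every column, which is what the Apache wiki quote above is about.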

Number Vs Varchar2 in Where Clause for performance

I have a table with a STATUS column of VARCHAR2(25) and a STATE column of VARCHAR2(2), along with a few more columns.
While filtering records from the table, I'm using both the STATUS column and the STATE column in my query.
SELECT * FROM TAB WHERE STATUS = 'Active' AND STATE = 'WA';
Since the STATUS and STATE columns are of VARCHAR2 datatype, I would like to introduce two new columns, STATUS_ID and STATE_ID, with datatype NUMBER. The STATUS and STATE values are substituted with numeric values in STATUS_ID and STATE_ID, so that I can use the NUMBER columns instead of the VARCHAR2 columns in the WHERE clause.
SELECT * FROM TAB WHERE STATUS_ID = 1 AND STATE_ID = 2;
I'm comparing NUMBER with NUMBER and VARCHAR2 with VARCHAR2 only; there is no implicit or explicit datatype conversion in the query.
Will there be a performance improvement from using the NUMBER datatype instead of VARCHAR2 in the WHERE clause in Oracle Database? In other words, is it true that NUMBER outperforms VARCHAR2 in a WHERE clause?
Thanks.
Performance will be the same, since Oracle stores numbers as packed strings (unlike some other databases). The only thing you should consider before choosing a format is the set of operations you are going to perform on the value, and the amount of computing power needed to convert or look up values.
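As a semantics-only illustration of the proposed rewrite (SQLite through Python; this says nothing about Oracle's internal storage or timings, and the sample rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE TAB (
    ID INTEGER, STATUS TEXT, STATE TEXT, STATUS_ID INTEGER, STATE_ID INTEGER)""")
rows = [
    (1, "Active",   "WA", 1, 2),
    (2, "Inactive", "WA", 0, 2),
    (3, "Active",   "OR", 1, 3),
]
cur.executemany("INSERT INTO TAB VALUES (?,?,?,?,?)", rows)
# Both predicates select the same rows; only the compared datatype differs.
by_text = cur.execute(
    "SELECT ID FROM TAB WHERE STATUS = 'Active' AND STATE = 'WA'").fetchall()
by_num = cur.execute(
    "SELECT ID FROM TAB WHERE STATUS_ID = 1 AND STATE_ID = 2").fetchall()
print(by_text, by_num)
```

Either form returns the same result set, so the choice is about storage and indexing, not correctness.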

Can I create a mapping from integer values in a column to the text values they represent in SQL?

I have a table full of traffic accident data with column headers such as 'Vehicle_Manoeuvre', which contains integers; for example, 13 represents that the manoeuvre which caused the accident was 'overtaking moving vehicle'.
I know the mappings from integers to text as I have a (quite large) excel file with this data.
An example of what I want to know is the percentage of accidents that involved this type of manoeuvre, but I don't want to have to open the Excel file and look up the integer-to-text mappings every time I write a query.
I could manually change the integers in all the columns (write a query with all the possible mappings for each column, add the results as new columns, then delete the original columns), but this would take a long time.
Is it possible to create some kind of variable (like an array with the integers in the first column and the mapped text in the second) that SQL could use to understand how the text relates to the integers, allowing me to write the query below:
SELECT COUNT(Vehicle_Manoeuvre) FROM traffictable WHERE Vehicle_Manoeuvre='overtaking moving vehicle';
rather than:
SELECT COUNT(Vehicle_Manoeuvre) FROM traffictable WHERE Vehicle_Manoeuvre=13;
even though the data in the table is still in integer form?
You would do this with a Manoeuvres reference table:
create table Manoeuvres (
    ManoeuvreId int primary key,
    Name varchar(255) unique
);
insert into Manoeuvres (ManoeuvreId, Name)
values (13, 'Overtaking');
You might even have such a table already, if you know that 13 has a special meaning.
Then use a join:
SELECT COUNT(*)
FROM traffictable tt JOIN
     Manoeuvres m
     ON tt.Vehicle_Manoeuvre = m.ManoeuvreId
WHERE m.Name = 'Overtaking';
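The reference-table approach can be exercised end to end (SQLite through Python; the sample accident rows and the code 7 are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Manoeuvres (ManoeuvreId INTEGER PRIMARY KEY, Name TEXT UNIQUE)")
cur.execute("CREATE TABLE traffictable (Accident_Id INTEGER, Vehicle_Manoeuvre INTEGER)")
cur.execute("INSERT INTO Manoeuvres VALUES (13, 'Overtaking')")
cur.executemany("INSERT INTO traffictable VALUES (?, ?)",
                [(1, 13), (2, 13), (3, 7)])  # 7: some other manoeuvre code
count = cur.execute("""
    SELECT COUNT(*)
    FROM traffictable tt
    JOIN Manoeuvres m ON tt.Vehicle_Manoeuvre = m.ManoeuvreId
    WHERE m.Name = 'Overtaking'
""").fetchone()[0]
print(count)  # 2
```

The large Excel file of mappings would be loaded into the reference table once, after which every query can filter by name.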

update sql table current row

Complete noob alert! I need to store a largish set of data fields (480) for each of many devices I am measuring. Each field is a DECIMAL(8,5). First, is this an unreasonably large table? I have no real experience, so if it is unmanageable, I might start thinking about an alternative storage method.
Right now, I am creating a new row using INSERT, then trying to put the 480 data values into the new row using UPDATE (in a loop). Currently each UPDATE overwrites an entire column. How do I restrict the update to the last row? For example, with a table ("magnitude") having columns "id", "field1", "field2", ...:
UPDATE magnitude SET field1 = 3.14;
This modifies the entire "field1" column.
Was trying to do something like:
UPDATE magnitude SET field1 = 3.14 WHERE id = MAX(id)
Obviously I am a complete noob. Just trying to get this one thing working and move on... Did look around a lot but can't find a solution. Any help appreciated.
Instead of inserting a row and then updating it with values, you should insert an entire row, with populated values, at once, using the insert command.
I.e.
insert into tTable (column1, column2, ..., column n) values (datum1, datum2, ..., datum n)
Your table's definition should have the ID column marked as an identity column, which means the database will fill it in automatically when you insert; i.e. you don't need to specify it.
Re: appropriateness of the schema, I think 480 is a large number of columns. However, this is a straightforward enough example that you could try it and determine empirically if your system is able to give you the performance you need.
If I were doing this myself, I would go for a different solution that has many rows instead of many columns:
Create a table tDevice (ID int, Name nvarchar)
Create a table tData (ID int, Device_ID int, Value decimal(8,5))
-- With a foreign key on Device_ID back to tDevice.ID
Then, to populate:
Insert all your devices in tDevice
Insert one row into tData for every Device / Data combination
-- i.e. 480 x n rows, n being the number of devices
Then, you can query the data you want like so:
select * from tData join tDevice on tDevice.ID = tData.Device_ID
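The narrow-table design above can be sketched in SQLite through Python (the table and column names follow the answer; the measurement values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tDevice (ID INTEGER PRIMARY KEY, Name TEXT)")
cur.execute("""CREATE TABLE tData (
    ID INTEGER PRIMARY KEY,
    Device_ID INTEGER REFERENCES tDevice(ID),
    Value DECIMAL(8,5))""")
cur.execute("INSERT INTO tDevice (Name) VALUES ('device-1')")
device_id = cur.lastrowid
# One row per measurement field instead of 480 columns.
cur.executemany("INSERT INTO tData (Device_ID, Value) VALUES (?, ?)",
                [(device_id, v) for v in (3.14, 2.71828, 1.41421)])
rows = cur.execute("""
    SELECT tDevice.Name, tData.Value
    FROM tData JOIN tDevice ON tDevice.ID = tData.Device_ID
""").fetchall()
print(len(rows))  # 3
```

This also sidesteps the original insert-then-update problem: each measurement is a single self-contained INSERT.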

SSIS Lookup by NVarChar(Max) Column

I want to get the id from the target table by doing a lookup between an NVarChar(Max) column in the target table and an NVarChar(20) column in the source table. But it raises the error: Cannot map the lookup column, 'Column1', because the column data type is a binary large object block (BLOB).
In your Lookup transformation, you need to cast the blob (nvarchar(max)) to a non-blob type. In this case, I would assume you need to cast it to nvarchar(20).
You will need to write a query in the lookup transformation and not just select the table.
Assuming the lookup table looks like
LookupTable
--------------
Column0 int
Column1 nvarchar(max)
Column2 nvarchar(500)
Your query would look like:
SELECT
L.Column0
, CAST(L.Column1 AS nvarchar(20)) AS Column1
, L.Column2
FROM
dbo.LookupTable L
You should now be able to perform a lookup on that column.
You can't:
The join can be a composite join, which means that you can join
multiple columns in the transformation input to columns in the
reference dataset. The transformation supports join columns with any
data type, except for DT_R4, DT_R8, DT_TEXT, DT_NTEXT, or DT_IMAGE
Are you sure you are using the component correctly? You usually look up by ID to get the text.
Can you give more details?