Export big psql table to JSON

How can I export a big table to JSON? The output file is over 1 GB. I run:
copy (SELECT json_agg(export_data)::text FROM "table_name" export_data) TO '{{ path_name }}/{{ table_name }}.json' with csv quote E'\t' encoding 'UTF8'
I receive: out of memory, Cannot enlarge string buffer containing 1073741822 bytes by 1 more bytes.
Table columns:

First     uuid
Second    timestamp
Three     uuid
Four      timestamp
Five      uuid
Six       int4
Seven     text
Eight     uuid
Nine      int4
Ten       uuid
Eleven    uuid
Twelve    varchar(50)
Maybe there is a way to split the output by lines?
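
One way, as a minimal sketch reusing the COPY options from the question: emit one JSON object per row with row_to_json, so the server never has to build a single value larger than the 1 GB limit:
copy (SELECT row_to_json(export_data)::text FROM "table_name" export_data) TO '{{ path_name }}/{{ table_name }}.json' with csv quote E'\t' encoding 'UTF8'
This writes one JSON object per line instead of one big JSON array.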

Splitting the table into several parts and exporting each part separately:
The error occurs because json_agg builds the entire export as a single value, and PostgreSQL caps any single value at 1 GB. Let's assume that the size is proportional to the number of rows and decide that one n-th of the total number of rows is a reasonable size. You can then execute a procedure like the following one and get n resulting files:
CREATE OR REPLACE PROCEDURE table_export (table_name text, path_name text, n integer) LANGUAGE plpgsql AS
$$
DECLARE
total_count bigint ;
chunk_size bigint ;
i integer ;
BEGIN
-- INTO belongs to EXECUTE itself, not inside the dynamic SQL string
EXECUTE format('SELECT count(*) FROM %I', table_name) INTO total_count ;
-- cast before dividing: bigint division truncates, which would drop the last rows
chunk_size := ceil(total_count::numeric / n) ;
FOR i IN 0 .. (n-1) LOOP
-- LIMIT/OFFSET must apply to the rows *before* json_agg (hence the subquery);
-- an ORDER BY on a unique column would make the chunks deterministic
EXECUTE format
(E'COPY ( SELECT json_agg(export_data)::text
FROM ( SELECT * FROM %1$I OFFSET %3$s LIMIT %4$s ) export_data
)
TO %2$L with csv quote E\'\\t\' encoding \'UTF8\''
, table_name
, format('%s/%s_%s.json', path_name, table_name, i)
, i * chunk_size
, chunk_size
) ;
END LOOP ;
END ;
$$ ;
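For example, to split the table from the question into 10 files (the path is a placeholder, as in the question):
CALL table_export('table_name', '/path/to/export', 10);
This produces table_name_0.json through table_name_9.json, each containing one JSON array of roughly one tenth of the rows.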

Related

How to know the memory consumption of a single record in a MySQL database table

I have a DB table with only 35,000 records, but on export of the DB I found that the file size is more than 2 GB,
even though my table schema does not contain any BLOB-type data. Is there any way to identify the table row size in MySQL?
What if we first find out how much storage an empty table consumes by running the following query:
SELECT table_name AS "Table",
ROUND(((data_length + index_length) / 1024 / 1024), 2) AS "Size (MB)"
FROM information_schema.TABLES
WHERE table_schema = "database_name"
AND table_name = "your_table_name";
Then insert several rows of data into your table, run the query above again, and compute the difference and the average storage consumed per record.
Example:
I have a table:
create table returning_players
(
casino_id int not null,
game_id int not null,
returning_players_count int null,
primary key (casino_id, game_id)
);
Run the size query on the empty table to get the baseline, then insert 1,000,000 records into the table:
DROP PROCEDURE IF EXISTS query;
-- DELIMITER lets the mysql client accept the semicolons inside the body
DELIMITER $$
CREATE PROCEDURE query()
BEGIN
declare counter int default 0;
LABEL:
LOOP
IF counter >= 1000000 THEN
LEAVE LABEL;
END IF;
INSERT INTO returning_players VALUES(counter, 45, 5475);
set counter = counter + 1;
END LOOP LABEL;
END$$
DELIMITER ;
CALL query();
Run the size query again: the table now consumes approximately 32.56 MB for the 1,000,000 rows, i.e. roughly 34 bytes per row (32.56 * 1024 * 1024 / 1,000,000 ≈ 34 bytes). From this we get the consumption of one row.

Inserting 1 million records is taking too much time in MySQL

I have a simple table test with three columns:
id -> primary key (auto increment)
name -> varchar
age -> int
I am using a simple stored procedure to populate 1 million rows in the table.
drop PROCEDURE if EXISTS big_data;
DELIMITER $$
CREATE PROCEDURE big_data()
BEGIN
DECLARE i int DEFAULT 1;
-- one million single-row INSERTs
WHILE i <= 1000000 DO
INSERT INTO test(id, name, age) VALUES (i, 'name', 34);
SET i = i + 1;
END WHILE;
END$$
DELIMITER ;
CALL big_data();
The problem I am facing is that inserting 1 million records takes almost 6 to 7 hours with this simple schema. I want to know how to insert data faster.
I do not want to use the LOAD DATA INFILE query, and I have not changed any settings in my my.ini file.
I simply want to know the reason the inserts are so slow.
System Specification
Mysql version 5.5.25a
innodb_version =1.1.8
System = 16GB RAM / 8 cores @ 2.70 GHz
Inserting 100 rows at a time is 10 times as fast as 100 1-row INSERTs:
INSERT INTO foo (a, b, c)
VALUES
(1,2,3),
(4,5,6), ...;
LOAD DATA is probably even faster.
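The question rules LOAD DATA out, but for reference, a minimal sketch (the file path and CSV layout here are assumptions):
LOAD DATA INFILE '/tmp/test.csv'
INTO TABLE test
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(id, name, age);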
Try building the whole query as a single string and then executing it at once:
drop PROCEDURE if EXISTS big_data;
DELIMITER $$
CREATE PROCEDURE big_data()
BEGIN
DECLARE total_count MEDIUMTEXT DEFAULT '';
DECLARE i int DEFAULT 1;
WHILE i <= 1000000 DO
-- MySQL concatenates with CONCAT(), not +; 'name' must stay quoted in the built SQL
SET total_count = CONCAT(total_count, '(', i, ', ''name'', 34),');
SET i = i + 1;
END WHILE;
SET total_count = TRIM(TRAILING ',' FROM total_count);
-- execute the assembled multi-row INSERT as a prepared statement;
-- a single statement this large can exceed max_allowed_packet,
-- so flushing every few thousand rows is safer in practice
SET @sql = CONCAT('INSERT INTO test(id, name, age) VALUES ', total_count);
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
END$$
DELIMITER ;
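Then run it as before: CALL big_data();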

MySQL stored procedure: list of values as a parameter and using the IN clause

I have this stored procedure query. I'm using this code in my VB.NET dataset, so what I need is to pass a parameter into each WHERE clause, or else pass my whole WHERE clause into this stored procedure from VB.NET. If that is not possible, how can I do the WHERE ... IN clause? I'm getting an error when I call my stored procedure.
Maybe someone can give me an idea of how to handle this problem.
DELIMITER $$
DROP PROCEDURE IF EXISTS `lcs_rdb`.`sp_MissedCallsReport`$$
CREATE DEFINER=`root`@`localhost` PROCEDURE `sp_MissedCallsReport`()
BEGIN
select
cdr_extension_no, cdr_charge_to, COUNT(cdr_call_type_code) as answered,
SUM(cdr_call_type_code = 'BSY') as Busy,
sum(cdr_call_type_code = 'ABN') as abandon,
sum(cdr_call_type_code in ('BSY','ABN')) as total,
coalesce((sum(case cdr_call_type_code when 'ABN' then cdr_duration_number/60000 else 0 end) / sum(cdr_call_type_code = 'ABN')),0) as avg_abandon,
coalesce((sum(cdr_call_type_code in ('BSY','ABN')) /
(sum(cdr_call_type_code in ('BSY','ABN')) + COUNT(cdr_call_type_code))) *100,0) as missed_calls_rate
from cdr_departments
where cdr_site_id = '{0}' AND
cdr_datetime BETWEEN '{1}' AND '{2}'
AND cdr_call_class_id IN({3}) AND cdr_call_type_id IN({4})
AND cdr_extension_id IN({5}) or cdr_route_member_id IN ({6})
GROUP BY cdr_extension_no;
END$$
DELIMITER ;
I suggest you use IN parameters for your stored procedure and reference them in the WHERE clause.
Example:
PROCEDURE `sp_MissedCallsReport`(
IN param_cdr_site_id INT
, IN param_cdr_datetime_min DATETIME
, IN param_cdr_datetime_max DATETIME
, IN param_cdr_call_class_id_csv VARCHAR(1024) -- csv integers
, IN param_cdr_call_type_id_csv VARCHAR(1024) -- csv integers
, IN param_cdr_extension_id_csv VARCHAR(1024) -- csv integers
, IN param_cdr_route_member_id_csv VARCHAR(1024) -- csv integers
)
I suggest passing CSV strings of int values as varchar parameters for placeholders 3, 4, 5, and 6, because you want to use them with IN as a set to search in. Since we can't pass an array of values as procedure parameters, a CSV string is the usual alternative.
But because the input arrives as a single CSV string, IN is not suitable for the search (it would compare against the whole string, not its elements).
We can use FIND_IN_SET with CSV values.
Example (for your WHERE clause):
where cdr_site_id = param_cdr_site_id
AND cdr_datetime BETWEEN param_cdr_datetime_min AND param_cdr_datetime_max
AND FIND_IN_SET( cdr_call_class_id, param_cdr_call_class_id_csv )
AND FIND_IN_SET( cdr_call_type_id, param_cdr_call_type_id_csv )
AND ( FIND_IN_SET( cdr_extension_id, param_cdr_extension_id_csv )
or FIND_IN_SET ( cdr_route_member_id, param_cdr_route_member_id_csv ) )
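A call would then look like this (all values below are hypothetical, purely for illustration):
CALL sp_MissedCallsReport(7, '2015-01-01 00:00:00', '2015-01-31 23:59:59', '1,2,3', '4,5', '10,11,12', '20,21');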
Refer to:
CREATE PROCEDURE Syntax
FIND_IN_SET( str, strlist )

MySQL: convert an int to a MAC address

I have a table with two columns. One column has IPs stored as integers, and I used the following function in my MySQL query. Is there a function I can use to convert my mac column, which contains integers (data type BIGINT), to a MAC address?
SELECT INET_NTOA(ip_address) AS myip,mymac
FROM table1
Assuming that you stored the MAC address by stripping all separators and converting the resulting hex number into an integer, the conversion from this integer back to a human-readable MAC address would be:
function int2macaddress($int) {
    $hex = base_convert($int, 10, 16);  // decimal -> hex string
    while (strlen($hex) < 12)
        $hex = '0'.$hex;                // left-pad to 12 hex digits
    return strtoupper(implode(':', str_split($hex,2))); // byte pairs joined by ':'
}
The function is taken from http://www.onurguzel.com/storing-mac-address-in-a-mysql-database/
The MySQL version for this function:
delimiter $$
create function itomac (i BIGINT)
returns char(20)
language SQL
begin
declare temp CHAR(20);
set temp = lpad (hex (i), 12, '0');
return concat (left (temp, 2),':',mid(temp,3,2),':',mid(temp,5,2),':',mid(temp,7,2),':',mid(temp,9,2),':',mid(temp,11,2));
end;
$$
delimiter ;
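For example, 1234567890 decimal is 499602D2 in hex, so:
SELECT itomac(1234567890); -- returns 00:00:49:96:02:D2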
You can also do it directly in SQL, like this:
select
concat (left (b.mh, 2),':',mid(b.mh,3,2),':',mid(b.mh,5,2),':',mid(b.mh,7,2),':',mid(b.mh,9,2),':',mid(b.mh,11,2))
from (
select lpad (hex (a.mac_as_int), 12, '0') as mh
from (
select 1234567890 as mac_as_int
) a
) b
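which likewise returns 00:00:49:96:02:D2.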
Just use HEX():
For a numeric argument N, HEX() returns a hexadecimal string representation of the value of N treated as a longlong (BIGINT) number.
Therefore, in your case:
SELECT INET_NTOA(ip_address) AS myip, HEX(mymac)
FROM table1
Note that this won't insert byte delimiters, such as colon characters, nor left-pad to 12 digits: HEX(1234567890) returns just 499602D2.

MySQL: how to concat/append to a binary data type

I am trying to write a stored function that returns the BINARY(20) type. I would like to build this return value by putting string, int, and float values into it, but I couldn't figure out how to append binary data.
CREATE FUNCTION `test`() RETURNS binary(20)
BEGIN
declare v binary(20);
set v:= CAST('test' as binary);
set v := v || cast(5 as binary); -- I would like to append 5 as binary but how?
return v;
END
The first line writes 'test' as binary; at the second line I would like to append 5 as binary. How can I do that? Thank you all.
|| in MySQL is the logical OR operator; you want to use CONCAT:
SET v := CONCAT(v, CAST(5 AS BINARY));
Instead of CAST(5 AS BINARY), you can use the shorthand BINARY 5.
To concatenate the byte value 5, rather than the character '5', use CHAR(5) rather than BINARY 5.
So:
select concat("test", char(5))
returns a blob of 5 bytes. You can verify this with:
select length(concat("test", char(5))), hex(concat("test", char(5)));
To pad it out to a 20 byte array:
select convert(concat("test", char(5)), binary(20));
In your stored function, you just need:
set v:= concat("test", char(5));
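Putting the pieces together, a minimal sketch of the complete function (assignment to BINARY(20) zero-pads the value to 20 bytes):
DELIMITER $$
CREATE FUNCTION `test`() RETURNS binary(20) DETERMINISTIC
BEGIN
  DECLARE v binary(20);
  -- 'test' followed by the single byte 0x05
  SET v := concat('test', char(5));
  RETURN v;
END$$
DELIMITER ;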