How to make a field act like a table - MySQL

Sorry, but I couldn't find or search for the specific term for this, so here goes.
I have these tables:
users( id , name , kills , ... );
quests( id , name , description , enemy_id , n_to_kill , en_killed , used_id );
What I want to achieve is for every user to have their own separate quests table.
I don't understand how it would be possible, but I think it should be(?)
Example:
users: { 1 , admin , 7 } { 2 , user , 11 } { 3 , john , 0 }
quests: { 1 , "killall" , "kill all" , 0 , 100 , 0 , ?? }
So I want to store, for each user, how many quests they have finished (killed x out of y).
Is it even possible?
Thanks for reading...
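A common way to get this effect without a separate quests table per user (just a sketch of the usual link-table pattern, not something from the original post) is to keep quests shared and store each user's progress in a third table, for example:

CREATE TABLE user_quests (
  user_id   INT NOT NULL,            -- references users.id
  quest_id  INT NOT NULL,            -- references quests.id
  en_killed INT NOT NULL DEFAULT 0,  -- this user's kills toward quests.n_to_kill
  finished  TINYINT(1) NOT NULL DEFAULT 0,
  PRIMARY KEY (user_id, quest_id),
  FOREIGN KEY (user_id) REFERENCES users(id),
  FOREIGN KEY (quest_id) REFERENCES quests(id)
);

The table and column names here are hypothetical; the point is that per-user progress lives in the link table rather than in quests itself.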

How do I extract keys from a JSON object?

My splunk instance queries a database once an hour for data about products, and gets a JSON string back that is structured like this:
{"counts":
{"green":413,
"red":257,
"total":670,
"product_list":
{ "urn:product:1":{
"name":"M & Ms" ,
"total":332 ,
"green":293 ,
"red":39 } ,
"urn:product:2":{
"name":"Christmas Ornaments" ,
"total":2 ,
"green":0 ,
"red":2 } ,
"urn:product:3":{
"name":"Traffic Lights" ,
"total":1 ,
"green":0 ,
"red":1 } ,
"urn:product:4":{
"name":"Stop Signs" ,
"total":2 ,
"green":0 ,
"red":2 },
...
}
}
}
I have a query that alerts when the counts.green drops by 10% over 24 hours:
index=database_catalog source=RedGreenData | head 1
| spath path=counts.green output=green_now
| table green_now
| join host
[| search index=database_catalog source=RedGreenData latest=-1d | head 1 | spath path=counts.green output=green_yesterday
| table green_yesterday]
| where green_yesterday > 0
| eval delta=(green_yesterday - green_now)/green_yesterday * 100
| where delta > 10
While I'm an experienced developer in C, C++, Java, SQL, JavaScript, and several others, I'm fairly new to Splunk's Search Processing Language, and references and tutorials seem pretty light, at least the ones I've found.
My next story is to at least expose all the individual products, and identify which ones have a 10% drop over 24 hours.
I thought a reasonable learning exercise would be to extract the names of all the products, and eventually turn that into a table with name, product code (e.g. urn:product:4), green count today, green count 24 hours ago, and then filter that on a 10% drop for all products where yesterday's count is positive. And I'm stuck. The references to {} are all for a JSON array [], not a JSON object with keys and values.
I'd love to get a table out that looks something like this:
ID              Name                  Green   Red   Total
urn:product:1   M & Ms                293     39    332
urn:product:2   Christmas Ornaments   0       2     2
urn:product:3   Traffic Lights        0       1     1
urn:product:4   Stop Signs            0       2     2
How do I do that?
I think this produces the output you want:
| spath
| table counts.product_list.*
| transpose
| rex field=column "counts.product_list.(?<ID>[^.]*).(?<fieldname>.*)"
| fields - column
| xyseries ID fieldname "row 1"
| table ID name green red total
Use transpose to get the field names as data.
Use rex to extract the ID and the field name.
Use xyseries to pivot the data into the output.
Here is a run-anywhere example using your source data:
| makeresults
| eval _raw="
{\"counts\":
{\"green\":413,
\"red\":257,
\"total\":670,
\"product_list\":
{ \"urn:product:1\":{
\"name\":\"M & Ms\" ,
\"total\":332 ,
\"green\":293 ,
\"red\":39 } ,
\"urn:product:2\":{
\"name\":\"Christmas Ornaments\" ,
\"total\":2 ,
\"green\":0 ,
\"red\":2 } ,
\"urn:product:3\":{
\"name\":\"Traffic Lights\" ,
\"total\":1 ,
\"green\":0 ,
\"red\":1 } ,
\"urn:product:4\":{
\"name\":\"Stop Signs\" ,
\"total\":2 ,
\"green\":0 ,
\"red\":2 },
}
}
}"
| spath
| table counts.product_list.*
| transpose
| rex field=column "counts.product_list.(?<ID>[^.]*).(?<fieldname>.*)"
| fields - column
| xyseries ID fieldname "row 1"
| table ID name green red total

Delete Only First Zero in Column

ch.value contains the values 00, 01, 02, 03.
They must become 0, 1, 2, 3.
I am using this code:
Concat_ws('_', ch.prop, Trim(LEADING '0' FROM ch.value))
This code does not work for the 00 case. How can I fix it?
You can use the condition below (the else branch keeps values that do not start with '0' unchanged; without it they would become NULL):
Concat_ws('_', ch.prop, case when left(ch.value,1)='0' then substring(ch.value,2) else ch.value end)
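If ch.value only ever holds digits, another option (a sketch, not from the original answer) is to cast the string to a number, which strips the leading zeros for you:

-- '00' -> 0, '01' -> 1, '02' -> 2, '03' -> 3
Concat_ws('_', ch.prop, Cast(ch.value AS UNSIGNED))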

cannot pass more than 100 arguments to a function to json_build_object

I am trying to build JSON from the columns of a table with json_build_object, but it gives me the error "cannot pass more than 100 arguments to a function", even though I don't think the argument count exceeds 100.
The code is as follows:
array_agg(json_build_object
(
'QuotaName',quota_name,
'QuotaId',quota_id,
'CellId',COALESCE(cell_id,0),
'ValidPanelistCountOtherMedias',COALESCE(valid_panelist_count,0) ,
'ValidPanelistCountMM',COALESCE(mm_valid_panelist_count,0) ,
'Gender',COALESCE(replace(replace(replace(gender,',',':'),']',''),'[',''),''),
'Occupation',COALESCE(replace(replace(replace(occupation_id,',',':'),']',''),'[',''),''),
'Industry',COALESCE(replace(replace(replace(industry_id,',',':'),']',''),'[',''),''),
'Prefecture',COALESCE(replace(replace(replace(prefecture_id,',',':'),']',''),'[',''),''),
'Age1',COALESCE(replace(replace(replace(age,',',':'),']',''),'[',''),''),
'Age2',COALESCE(replace(replace(replace(age2,',',':'),']',''),'[',''),''),
'MaritalStatus',COALESCE(replace(replace(replace(marital_status,',',':'),']',''),'[',''),''),
'HouseHoldIncome',COALESCE(replace(replace(replace(house_income_id,',',':'),']',''),'[',''),''),
'PersonalIncome',COALESCE(replace(replace(replace(personal_income_id,',',':'),']',''),'[',''),''),
'hasChild',COALESCE(replace(replace(replace(has_child,',',':'),']',''),'[',''),''),
'MediaId',COALESCE(replace(replace(replace(media_id,',',':'),']',''),'[',''),''),
'DeviceUsed',COALESCE(replace(replace(replace(device_type,',',':'),']',''),'[',''),''),
'PanelistStatus','',
'IR1', COALESCE(ir_1,1) ,
'IR2', COALESCE(ir_2,1) ,
'IR3', COALESCE(ir_3,1) ,
'Population',COALESCE(population,0),
'MainSurveySampleHopes', COALESCE(sample_hope_main_survey,0) ,
'ScreeningSurveySampleHopes', COALESCE(sample_hope_main_scr,0),
'ParticipateIntentionMM' ,COALESCE(participate_intention_mm,0) ,
'ParticipateIntentionOthers' ,COALESCE(participate_intention,0) ,
'AcquisitionRate', COALESCE(acquisition_rate,0) ,
'PCEnvironment', COALESCE(case when survey_type >3 then 1 else pc_env end,0) ,
'NetworkEnvironment',COALESCE(case when survey_type >3 then 1 else network_env end,0) ,
'PCEnvironmentMM',COALESCE(case when survey_type >3 then 1 else pc_env_mm end,0),
'NetworkEnvironmentMM',COALESCE(case when survey_type >3 then 1 else network_env_mm end,0) ,
'ControlQuotient',COALESCE(control_quotient,0)/100 ,
'ResponseofSCR24' , COALESCE(res_of_scr_24,0),
'ResponseofSCR48' ,COALESCE(res_of_scr_48,0) ,
'ResponseofSCR72' ,COALESCE(res_of_scr_72,0) ,
'ResponseofSCR168' ,COALESCE(res_of_scr_168,0),
'ResponseofMAIN24' ,COALESCE(res_of_main_24,0) ,
'ResponseofMAIN48' , COALESCE(res_of_main_48,0) ,
'ResponseofMAIN72' , COALESCE(res_of_main_72,0) ,
'ResponseofMAIN168' , COALESCE(res_of_main_168,0),
'ResponseofSCR24MM' ,COALESCE(res_of_scr_24_mm,0) ,
'ResponseofSCR48MM' , COALESCE(res_of_scr_48_mm,0),
'ResponseofSCR72MM' , COALESCE(res_of_scr_72_mm,0) ,
'ResponseofSCR168MM' ,COALESCE(res_of_scr_168_mm,0) ,
'ResponseofMAIN24MM' ,COALESCE(res_of_main_24_mm,0),
'ResponseofMAIN48MM' ,COALESCE(res_of_main_48_mm,0),
'ResponseofMAIN72MM' ,COALESCE(res_of_main_72_mm,0),
'ResponseofMAIN168MM' ,COALESCE(res_of_main_168_mm,0),
'ResponseofMAINIntegrationType',0.9,-- this value is based on answer_estimate_list_details_v3
'ParticipationIntention',COALESCE(participate_intention,0),
'MostRecentParticipation',COALESCE(most_recent_exclusions,0)
I had the exact same problem earlier today. After some research, I found that JSONB results can be concatenated. So you should use JSONB_BUILD_OBJECT instead of JSON_BUILD_OBJECT. Then, split things up so you have multiple JSONB_BUILD_OBJECT calls, which are combined with '||'. You'll also need JSONB_AGG for converting the results into an array.
JSONB_AGG(
JSONB_BUILD_OBJECT (
'QuotaName',quota_name,
'QuotaId',quota_id,
'CellId',COALESCE(cell_id,0),
'ValidPanelistCountOtherMedias',COALESCE(valid_panelist_count,0) ,
'ValidPanelistCountMM',COALESCE(mm_valid_panelist_count,0) ,
'Gender',COALESCE(replace(replace(replace(gender,',',':'),']',''),'[',''),''),
'Occupation',COALESCE(replace(replace(replace(occupation_id,',',':'),']',''),'[',''),''),
'Industry',COALESCE(replace(replace(replace(industry_id,',',':'),']',''),'[',''),''),
'Prefecture',COALESCE(replace(replace(replace(prefecture_id,',',':'),']',''),'[',''),''),
'Age1',COALESCE(replace(replace(replace(age,',',':'),']',''),'[',''),''),
'Age2',COALESCE(replace(replace(replace(age2,',',':'),']',''),'[',''),''),
'MaritalStatus',COALESCE(replace(replace(replace(marital_status,',',':'),']',''),'[',''),''),
'HouseHoldIncome',COALESCE(replace(replace(replace(house_income_id,',',':'),']',''),'[',''),''),
'PersonalIncome',COALESCE(replace(replace(replace(personal_income_id,',',':'),']',''),'[',''),''),
'hasChild',COALESCE(replace(replace(replace(has_child,',',':'),']',''),'[',''),''),
'MediaId',COALESCE(replace(replace(replace(media_id,',',':'),']',''),'[',''),''),
'DeviceUsed',COALESCE(replace(replace(replace(device_type,',',':'),']',''),'[',''),''),
'PanelistStatus','',
'IR1', COALESCE(ir_1,1) ,
'IR2', COALESCE(ir_2,1) ,
'IR3', COALESCE(ir_3,1) ,
'Population',COALESCE(population,0),
'MainSurveySampleHopes', COALESCE(sample_hope_main_survey,0) ,
'ScreeningSurveySampleHopes', COALESCE(sample_hope_main_scr,0),
'ParticipateIntentionMM' ,COALESCE(participate_intention_mm,0) ,
'ParticipateIntentionOthers' ,COALESCE(participate_intention,0) ,
'AcquisitionRate', COALESCE(acquisition_rate,0) ,
'PCEnvironment', COALESCE(case when survey_type >3 then 1 else pc_env end,0) ,
'NetworkEnvironment',COALESCE(case when survey_type >3 then 1 else network_env end,0) ,
'PCEnvironmentMM',COALESCE(case when survey_type >3 then 1 else pc_env_mm end,0),
'NetworkEnvironmentMM',COALESCE(case when survey_type >3 then 1 else network_env_mm end,0) ,
'ControlQuotient',COALESCE(control_quotient,0)/100 ,
'ResponseofSCR24' , COALESCE(res_of_scr_24,0),
'ResponseofSCR48' ,COALESCE(res_of_scr_48,0) ,
'ResponseofSCR72' ,COALESCE(res_of_scr_72,0) ,
'ResponseofSCR168' ,COALESCE(res_of_scr_168,0),
'ResponseofMAIN24' ,COALESCE(res_of_main_24,0) ,
'ResponseofMAIN48' , COALESCE(res_of_main_48,0) ,
'ResponseofMAIN72' , COALESCE(res_of_main_72,0) ,
'ResponseofMAIN168' , COALESCE(res_of_main_168,0),
'ResponseofSCR24MM' ,COALESCE(res_of_scr_24_mm,0) ,
'ResponseofSCR48MM' , COALESCE(res_of_scr_48_mm,0),
'ResponseofSCR72MM' , COALESCE(res_of_scr_72_mm,0) ,
'ResponseofSCR168MM' ,COALESCE(res_of_scr_168_mm,0) ,
'ResponseofMAIN24MM' ,COALESCE(res_of_main_24_mm,0),
'ResponseofMAIN48MM' ,COALESCE(res_of_main_48_mm,0),
'ResponseofMAIN72MM' ,COALESCE(res_of_main_72_mm,0),
'ResponseofMAIN168MM' ,COALESCE(res_of_main_168_mm,0)
) ||
JSONB_BUILD_OBJECT (
'ResponseofMAINIntegrationType',0.9,-- this value is based on answer_estimate_list_details_v3
'ParticipationIntention',COALESCE(participate_intention,0),
'MostRecentParticipation',COALESCE(most_recent_exclusions,0)
)
)
I got this from documentation here - https://www.postgresql.org/docs/current/functions-json.html#FUNCTIONS-JSONB-OP-TABLE
Look for "jsonb || jsonb"
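As a minimal, standalone illustration of that operator (my own sketch, not part of the original answer), each jsonb_build_object call stays under the 100-argument limit and || merges the pieces into one object:

SELECT jsonb_build_object('a', 1, 'b', 2)
    || jsonb_build_object('c', 3) AS combined;
-- returns {"a": 1, "b": 2, "c": 3}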

How do I multiply data in a MySQL database to simulate scale?

Input:
A database consisting of
static tables that do not scale with number of users or time
dynamic tables that grow when users interact with the application (so scale with number of users and time)
a database with real life data for x users
Task:
scale the database to simulate larger number of users
Example:
Tables:
t_user (scale target)
UserId , Name
1 , John
2, Terry
t_post (dynamic)
AuthorId, PostId, TagId
1, 1 , 1
1, 2 , 2
1, 3 , 2
2, 4 , 1
t_tag (static)
TagId, Name
1, C#
2, Java
Desired output with scale factor = 2
t_user
UserId , Name
1 , John
2, Terry
3 , John
4, Terry
t_post (dynamic)
AuthorId, PostId, TagId
1, 1 , 1
1, 2 , 2
1, 3 , 2
2, 4 , 1
1, 5 , 1
1, 6 , 2
1, 7 , 2
2, 8 , 1
t_tag (static)
TagId, Name
1, C#
2, Java
Of course, for such a small database this can be done in MySQL, but I need a solution that will work for a database with 150+ tables (writing a scaling routine for each is not an option) and scale factors that will bring the database from 100 up to 10,000 users.
Does anyone know a dedicated tool or hack that can accomplish this?
Benchmark Factory for Databases looks like it might do what you need it to, or you could give the MySQL Benchmark Tool a try.
I ended up writing my own script. Below is a simplified version (many columns are omitted for clarity). This worked very well; I was able to scale the DB by a factor of 100 quite efficiently. Hope this helps.
SET autocommit = 0;
START TRANSACTION;
-- Remember the current maximum IDs so the copied rows get new, non-colliding keys
SET @UMAX = (SELECT MAX(UserID) AS MX FROM t_user);
SET @QSMAX = (SELECT MAX(QuestionSetID) AS MX FROM t_question_set);
SET @QGMAX = (SELECT MAX(QuestionGroupID) AS MX FROM t_question_group);
SET @QMAX = (SELECT MAX(QuestionID) AS MX FROM t_question);
SET @TMAX = (SELECT MAX(TestID) AS MX FROM t_test);
-- Helper sequence table: one row per extra copy of the data
DROP TABLE IF EXISTS t_seq;
CREATE TABLE t_seq AS
(
SELECT
1 S
);
INSERT INTO t_seq (S) VALUES (2),(3),(4),(5),(6),(7),(8),(9),(10);
INSERT INTO `t_user`
(
`UserID`,
`Login`,
`Password`
)
SELECT
`UserID` + 1000000 + @UMAX * t_seq.S,
concat(if(Login is null, '', Login), `UserID` + 1000000 + @UMAX * t_seq.S),
`Password`
FROM t_user,
t_seq;
INSERT INTO `t_question_set`(`QuestionSetID`)
SELECT `QuestionSetID` + 1000000 + @QSMAX * t_seq.S
FROM t_question_set,t_seq;
INSERT INTO `t_question_group`(
`QuestionGroupID`,
`QuestionSetID`
)
SELECT
`QuestionGroupID` + 1000000 + @QGMAX * t_seq.S,
`QuestionSetID` + 1000000 + @QSMAX * t_seq.S
FROM t_question_group,t_seq;
INSERT INTO `t_question`(`QuestionID`, `QuestionGroupID`)
SELECT
`QuestionID` + 1000000 + @QMAX * t_seq.S,
`QuestionGroupID` + 1000000 + @QGMAX * t_seq.S
FROM t_question, t_seq;
INSERT INTO `t_test`
(
`TestID`,
`QuestionSetID`,
`UserID`
)
SELECT
`TestID` + 1000000 + @TMAX * t_seq.S,
`QuestionSetID` + 1000000 + @QSMAX * t_seq.S,
`UserID` + 1000000 + @UMAX * t_seq.S
FROM t_test,t_seq;
INSERT INTO `t_question_answer`(
`QuestionID`,
`TestID`
)
SELECT
`QuestionID` + 1000000 + @QMAX * t_seq.S,
`TestID` + 1000000 + @TMAX * t_seq.S
FROM t_question_answer,t_seq;
COMMIT;
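The ten rows in t_seq above produce ten extra copies of the data; for a bigger scale factor, t_seq just needs more rows before the INSERT ... SELECT statements run. One way to extend it (my own sketch, not part of the original script):

-- Run right after the INSERT INTO t_seq above: adds sequence values 11..99,
-- so each copy statement produces 99 extra copies instead of 10.
INSERT INTO t_seq (S)
SELECT a.S + 10 * b.S
FROM t_seq a, t_seq b
WHERE a.S + 10 * b.S <= 99;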

Null rows in output of query

System:
Windows XP professional
Postgresql 8.4 running on locally stored database (port 5432)
I am using the following code to import a decent size file into a table in my database:
-- Create headers and column variable types
CREATE TABLE special_raw_emissions
(
ORG_ID int ,
COUNTY text ,
MID_UP_STREAM text ,
SITE_ID int ,
PRF_ID int ,
RN text ,
ACCOUNT text ,
CERTIFYING_COMPANY_ORGANIZATION text ,
LEASE_NAME text ,
LEASE_NUMBER text ,
SOURCE text ,
FACILITY_ID int ,
PROFILE text ,
LATITUDE float ,
LONGITUDE float ,
OIL_bbl_yr float ,
CASINGHEAD_GAS_scf_yr float ,
GAS_WELL_GAS_scf_yr float ,
CONDENSATE_bbl_yr float ,
PRODUCED_WATER_bbl_yr float ,
TOTAL_VOC_EMISSION_tpy_EXTRACTED_FROM_SE_TAB float ,
CONTROL_PRESENT boolean ,
CONTROL_TYPE text ,
CONTROL_TYPE_IF_OTHER_DESCRIBE text ,
NOX_CONTROL_EFFICIENCY_PCNT float ,
VOC_CONTROL_EFFICIENCY_PCNT float ,
VENTED_VOLUME_scf_yr float ,
BLOWDOWN_EVENTS int ,
OPERATING_HOURS_hrs_yr float ,
FUEL_CONSUMPTION_MMscf_yr float ,
PILOT_GAS_USED_MMscf_yr float ,
WASTE_GAS_COMBUSTED_MMscf_yr float ,
GAS_TREATED_MMscf_yr float ,
AVERAGE_DAILY_PRODUCTION_RATE_MMscf_day float ,
THROUGHPUT_bbl_yr float ,
SEPARATOR_PRESSURE_psig float ,
SEPARATOR_TEMPERATURE_deg_F float ,
GAS_GRAVITY float ,
MAXIMUM_DAILY_PRODUCTION_bbl_day text ,
SOURCE_ANNUAL_THROUGHPUT_bbl_yr float ,
ANNUAL_THROUGHPUT_bbl_yr float ,
MAXIMUM_DAILY_PRODUCTION_RATE__bbl_day float ,
SERIAL_NUMBER text ,
MAKE text ,
MODEL text ,
FUEL_TYPE text ,
MAXIMUM_DESIGN_CAPACITY text ,
BURN_TYPE text ,
CYCLE text ,
ENGINE_RATING text ,
ASSIST_TYPE text ,
AUTOMATIC_AIR_TO_FUEL_RATIO_CONTROLLER boolean ,
DESTRUCTION_EFFICIENCY text ,
SUBJECT_TO_MACT boolean ,
IF_YES_INDICATE_MAJOR_OR_AREA_SOURCE text ,
SOURCE_TYPE text ,
IF_CONDENSER_WHAT_IS_EFFICIENCY text ,
LIQUID_TYPE text ,
IS_HARC_51C_ACCEPTED_METHOD text ,
WOULD_YOU_LIKE_TO_USE_HARC text ,
SINGLE_OR_MULTIPLE_TANKS text ,
NUMBER_OF_TANKS int ,
CONFIGURATION_TYPE text ,
WORKING_AND_BREATHING_EMISS_CALC_METHOD text ,
FLASH_EMISS_CAL_METHOD text ,
FLASH_IF_OTHER_PLEASE_DESCRIBE text ,
IS_MONITORING_PROGRAM_VOLUNTARY int ,
AIR_ACTUATED_PNEUMATIC_VALVES_GAS int ,
AIR_ACTUATED_PNEUMATIC_VALVES_LIGHT_OIL int ,
CONNECTORS_GAS int ,
CONNECTORS_LIGHT_OIL int ,
FLANGES_GAS int ,
FLANGES_LIGHT_OIL int ,
GAS_ACTUATED_PNEUMATIC_VALVES_GAS int ,
GAS_ACTUATED_PNEUMATIC_VALVES_LIGHT_OIL int ,
IS_COMPLETION_OPTIONAL text ,
NONACTUATED_VALVES_GAS int ,
NONACTUATED_VALVES_LIGHT_OIL int ,
OPEN_ENDED_LINES_GAS int ,
OPEN_ENDED_LINES_LIGHT_OIL int ,
OTHER_GAS int ,
OTHER_LIGHT_OIL int ,
PUMP_SEALS_GAS int ,
PUMP_SEALS_LIGHT_OIL int ,
TOTAL_COMPONENTS int ,
TOTAL_PUMPS_AND_COMPRESSOR_SEALS text ,
TOTAL_UNCONTROLLED_RELIEF_VALVES text ,
GAS_ACTUATED_PNEUMATIC_VALVES_HEAVY_OIL int ,
AIR_ACTUATED_PNEUMATIC_VALVES_HEAVY_OIL int ,
NON_ACTUATED_VALVES_HEAVY_OIL int ,
PUMP_SEALS_HEAVY_OIL int ,
CONNECTORS_HEAVY_OIL int ,
FLANGES_HEAVY_OIL int ,
OPEN_ENDED_LINES_HEAVY_OIL int ,
OTHER_HEAVY_OIL int ,
GAS_ACTUATED_PNEUMATIC_VALVES_WATER_SLASH_OIL text ,
AIR_ACTUATED_PNEUMATIC_VALVES_WATER_SLASH_OIL int ,
NON_ACTUATED_VALVES_WATER_SLASH_OIL int ,
PUMP_SEALS_WATER_SLASH_OIL int ,
CONNECTORS_WATER_SLASH_OIL int ,
FLANGES_WATER_SLASH_OIL int ,
OPEN_ENDED_LINES_WATER_SLASH_OIL int ,
OTHER_WATER_SLASH_OIL text ,
VOC_Gas_Mole_Percent float ,
BENZENE_Gas_Mole_Percent float ,
ETHYBENZENE_Gas_Mole_Percent float ,
n_HEXANE_Gas_Mole_Percent float ,
TOLUENE_Gas_Mole_Percent float ,
XYLENE_S_Gas_Mole_Percent float ,
HAPs_Gas_Mole_Percent float ,
VOC_Liquid_Mole_Percent float ,
BENZENE_Liquid_Mole_Percent float ,
ETHYBENZENE_Liquid_Mole_Percent float ,
n_HEXANE_Liquid_Mole_Percent float ,
TOLUENE_Liquid_Mole_Percent float ,
XYLENE_S_Liquid_Mole_Percent float ,
HAPs_Liquid_Mole_Percent float ,
VOC_Control_Factor_PERC float ,
CH4_Emission_Factor_tonne_Btu float,
Engine_LF float ,
CO2_M1 float ,
CO2_M2 float ,
CH4_M1 float ,
CH4_M2 float ,
Source_Class text ,
Site_class text);
-- Import data into database, note that the delimiter is '~'.
COPY special_raw_emissions
FROM 'C:/PostgreSQL/special results/batch.csv'
WITH DELIMITER AS '~'
CSV;
I was running into some strange errors, so I did a QA check and queried this table to see if the data imported correctly. The query is shown below:
\o 'c:/postgresql/special_raw_emissions.csv'
select * from special_raw_emissions;
\o
My query returns all the data that was imported, but 'null rows' are randomly added. Shown below is an example of a 'null row'.
Data input:
155 Wise Midstream 8250 1
155 Wise Midstream 8250 1
4 Wise Upstream 7220 1
4 Wise Upstream 7220 1
95 Wise Midstream 7742 1
95 Wise Midstream 7742 1
7 Clay Upstream 1990 7
7 Cooke Upstream 1414 7
Data with null rows (the example shown below suggests a pattern, but that is not the case in the larger output file):
7 Clay Upstream 1990 7
7 Cooke Upstream 1414 7
7 Cooke Upstream 1415 7
7 Cooke Upstream 1416 7
7 Cooke Upstream 3355 7
7 Cooke Upstream 3356 7
7 Cooke Upstream 1418 7
7 Cooke Upstream 3357 7
7 Cooke Upstream 1419 7
7 Cooke Upstream 7489 7
Like I said previously, these null rows are causing my queries to miss certain data and I am losing information.
Any help or guidance is greatly appreciated!
The problem was solved in two steps.
1. Open the raw data in Excel, save it with the appropriate delimiter ('~' in my case), and close the file.
2. Re-import the data into the database.
My speculation is that the raw data, which was created by another psql query, was somehow corrupted or was missing a line-ending character. Re-saving it in Excel fixed the issue and allowed the import to work properly.
I still feel as though the problem is unsolved; I simply found a workaround.
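If you want to confirm whether the blank rows actually made it into the table itself (rather than only into the exported file), a quick check along these lines may help. This is just a sketch against a few of the columns defined above, not part of the original workaround:

-- Count imported rows where columns that should always be populated are NULL
SELECT count(*) AS suspect_rows
FROM special_raw_emissions
WHERE org_id IS NULL
  AND county IS NULL
  AND site_id IS NULL;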