QGIS 3.22 - When creating a new layer from a CSV, the data is being recorded as NULL when it's joined

The issue shows up as the text NULL, and it is the same for all the columns.
I am not sure what I did wrong, but I know the columns should just be text, and when I try to join the layer with another layer it reads all the data as NULL.
I did make a .csvt file to go with it. I am thinking it's because I just updated my version, and something is different that I don't know about.
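For reference, my understanding is that the .csvt has to sit next to the CSV with the same base name (mylayer.csv and mylayer.csvt) and contain a single line with one quoted type per column. For a hypothetical three-column file where every column should be read as text, that line would be:

"String","String","String"

If the type declared for the join field doesn't match the type of the matching field in the other layer (e.g. "Integer" on one side, text on the other), the join can come back as all NULL.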

Related

Complex fill in the blanks query in MS Access 2010

I have looked and haven't found a method on here to do this. I am assuming my search is skewed and I just missed it; if this is the case, please let me know.
Anywhooo, I have a large and unwieldy report coming out of SAP every day. Because it will often have some strangeness, we import it into an Access database so we can keep an eye on the stuff we need in our department. I am using a combination of 6 fields to create a primary key in Access. The information in those fields is about the only consistent thing I get out of this SAP report; the remainder of the data can be considered dynamic and can change from day to day. Usually this is a matter of filling in a few blanks, occasionally it means changing existing data, and on rare occasions it may involve deleting data out of a handful of fields.
The SAP report is around 130 columns of data, so I'm looking for an efficient way to roll in the changes without overwriting what folks put in there manually.
EDIT:
Here is the way this is used. SAP (for reasons I'm not going to go into) sometimes has bad data show up in the daily report. We are using Access to track and enter the correct data so that we can generate much more accurate summaries. What the users put in is to be considered true and accurate.
The transactions we are tracking can take a long time to complete. Most take around 30 days. That's why I will have blank fields one day, and several of them will be filled in the next. We might not get any for a few days, and then a bunch more are filled in later. That is the normal flow.
What I have to account for is the odd occasion where a mistake is made early in the process. At a certain point, an error will break SAP's ability to update anything at all in the report we have to use.
I have 3 fields set up that trigger what my users' daily work is going to be. There is a logical flow: user 1 completes what he needs to do, and then that record shows up on user 2's report. These fields will also stop the general update process and push the record to an exception report if there is a difference between what is coming in from SAP and what is already in my database.
What I am looking for is some way to systematically fill in blank fields on existing records in Access. I do not want to overwrite a field if something is in it, only the null values. I can do this one field at a time, but each record has about 130 fields. I'm wondering if there is a way I could do this in just 1 query?
Thanks all! I hope the edit makes more sense now
A simple Google search for "Access SQL update null values" could have yielded what you need. But if all you need to do is fill constant values into empty fields, then something like:
UPDATE Table SET Table.field1 = VALUE
WHERE Table.field1 Is NULL;
Now if this data is different for each record based on, say, data from another field, then you may need to write some VBA to build that value/string for you. But otherwise, if you are JUST updating null fields to include data, then a simple UPDATE statement will do.
EDIT Based on new info:
So if I'm understanding correctly: you have two tables, one with the blank fields and another that contains the values you need.
If that's the case, you can use a similar UPDATE statement, but with an inner join to pull the data you need from table B to fill in table A:
UPDATE TableA INNER JOIN TableB ON TableA.KeyField = TableB.KeyField
SET TableA.NullField = TableB.NotNullField
WHERE TableA.NullField Is NULL;
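If you need to fold that over many fields in one statement without touching the non-null values, Nz() can stand in for the WHERE clause. A sketch with hypothetical field names (note Nz() only works in queries run from inside Access, not over ODBC):

UPDATE TableA INNER JOIN TableB ON TableA.KeyField = TableB.KeyField
SET TableA.Field1 = Nz(TableA.Field1, TableB.Field1),
    TableA.Field2 = Nz(TableA.Field2, TableB.Field2);

Each field keeps its current value when it is not null and takes TableB's value when it is, so one query can cover all 130 fields. Tedious to write once, but it runs as a single update.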

A way to update data in Oracle

I have a table that I need to update each day, and the data comes in a text file every time. I wrote a program that extracts the data from the text file and writes it into the table, but now I want to modify it to just update the existing data. The data is mostly the same; only a few things might differ.
I was thinking about MERGE, but I don't know exactly how I could use it in my program. All the examples that I saw used a second table.
So it would mean creating a second table into which I extract the current data, and then merging it into the old table to update the records. I want to avoid creating a second table, so I was wondering if there is any way to do this?
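For example, could I merge the rows one at a time straight from my program, with something like this? (Just a sketch of what I'm imagining, with made-up names and bind variables.)

MERGE INTO my_table t
USING (SELECT :id AS id, :val AS val FROM dual) src
ON (t.id = src.id)
WHEN MATCHED THEN
  UPDATE SET t.val = src.val
WHEN NOT MATCHED THEN
  INSERT (id, val) VALUES (src.id, src.val);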
Thanks!

Insert missing rows in CSV of incrementally numbered files generated by directory listing?

I have created a CSV from a set of files in a directory that are numbered incrementally:
img1_1.jpg, img1_2.jpg ... img1_1999.jpg, img1_2000.jpg
The CSV output is like so:
filename, datetime
eg:
img1_1.JPG,2011-05-11 09:16:33.000000000
img1_3.jpg,2011-05-11 10:10:55.000000000
img1_4.jpg,2011-05-11 10:17:31.000000000
img1_6.jpg,2011-05-11 10:58:37.000000000
The problem is, there are a number of files missing in the listing, as some of the files don't exist. As a result, when imported, the actual row number does not match the file number.
Can anyone think of a reasonably efficient way to insert the missing rows so that the row number and filename matches up other than manually inserting rows for the missing ones? (There are over 800 missing rows).
Background
A previous programmer developed an uploader script and did not save the creation time of the MySQL records in the database. I figured the easiest way to find the creation time for the majority of the records would be to output a directory listing of all the files and combine them in a spreadsheet.
You need to do exactly what you wrote in your comment to @tadman's answer.
Write a text parser script to inject the missing lines with, e.g., a date/time value that flags the record as an empty one, i.e. one with no real data behind it (e.g. date it to 1950-01-01 00:00:00). When that is done, bulk import the CSV. I think this is the best and most efficient solution.
Also, think about any future insert/delete/update events that might occur to your data.
Those could break the chain you initially had, so you might prefer instead to introduce a numeric field for the jpeg IDs (and index that field), and leave the PK as is (auto increment).
In this case you can avoid the CSV manipulation, as well as being chained to your auto-increment PK (meaning you will not get in trouble if a new jpeg arrives with an ID that was previously deleted, an existing ID, etc.).
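As a sketch of that second option (MySQL, assuming a hypothetical photos table whose filename column holds names like img1_42.jpg):

-- add and index the numeric field
ALTER TABLE photos ADD COLUMN img_num INT, ADD INDEX (img_num);

-- pull the number out of img1_<n>.jpg
UPDATE photos
SET img_num = CAST(SUBSTRING_INDEX(SUBSTRING_INDEX(filename, '_', -1), '.', 1) AS UNSIGNED);

After that, queries can key off img_num directly and the gaps in the sequence stop mattering.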
So the solution really depends on how you want to use this table in the future. If you give more details, I am sure the community can come up with even more ideas.
If it's a one-time thing, it might be easiest to open up your CSV in a spreadsheet.
If your table above is in Sheet1, you could put something like the following in Sheet2 (this is OpenOffice, but there are similar functions for Excel):
pre_filename | filename   | datetime
img1_1       | =A2&".JPG" | =OFFSET(Sheet1.$B$1;MATCH(B2;Sheet1.$A$2:$A$4;0);0)
You should be able to select the three cells above and drag them down to however many you need.

Access data type conversion

I am reworking and expanding a somewhat complex database schema that has a small number of tables and queries, but they are closely related. The only problem I had with it was that in one of the tables, the 2 fields relating to another table were storing the name of the referenced record rather than its ID.
I changed the referring fields' data type from text to number and entered some data. The queries and the reports work fine, with 1 exception:
There is one report that uses both referring fields. One of the fields is OK, but the other one shows symbols instead of numbers. (The IDs in my sample entries were 14 and 20, and the symbols shown were a double-barred music note (alt code 14) and a paragraph mark (alt code 20).) Investigating further, I found that if I make a query that contains the report's query source, both fields display fine; but if I add another table to that query, the second field once again shows symbols instead of numbers.
I have found a workaround for this by converting those fields back to text, and the ID fields in the other tables to text as well. This text key will probably haunt me later on, so I'd like to make it right before it is too late.
This is all Access 2010, btw. The source file was already in the 2010 format (it couldn't even be opened in 2007).
Sounds like a corruption issue for sure. I would try adding a new column and running an update query to populate it with the values from the old column (maybe using CInt(indexfield)), then deleting the old column.
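Something along these lines, with hypothetical names (CLng is the safer choice over CInt if the IDs can ever exceed 32,767):

ALTER TABLE MyTable ADD COLUMN NewID LONG;
UPDATE MyTable SET NewID = CLng([OldField]) WHERE [OldField] Is Not Null;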
It might also be a good idea to decompile the database. This often helps resolve corruption issues.

Reading hierarchical flat file into SSIS

I have a flat file that is structured in a hierarchical format that looks something like this:
Area|AreaCode|AreaDescription
Region|RegionCode|RegionDescription
Zone|ZoneCode|ZoneDescription
District|DistrictCode|DistrictDescription
Route|RouteCode|RouteDescription
Record|Name|Address|Etc
RouteFooter
Route|RouteCode|RouteDescription
Record|Name|Address|Etc
RouteFooter
DistrictFooter
District|DistrictCode|DistrictDescription
Route|RouteCode|RouteDescription
Record|Name|Address|Etc
Record|Name|Address|Etc
RouteFooter
Route|RouteCode|RouteDescription
Record|Name|Address|Etc
RouteFooter
DistrictFooter
ZoneFooter
RegionFooter
AreaFooter
I have to bring this into SSIS and consume information about each Record row, along with the header information for that row and information from several other sources, and output a simpler flat file as a result.
I would like to read the flat file above into a structure where each row contains a record with the appropriate header information included.
My question is, what is the best way to do this if it is even possible?
First, how do you tell what type of line you are on if you are on, say, line 3,987,986? How do you tell what is related to what? Is there a possibility you could get this in a better format? Before spending lots of time (and don't kid yourself, this will take lots of time to set up and test properly), I would kick the file back to the provider and request it in a different format. You won't always get it, but you should at least try.
When I have done this in the past in DTS, the first characters of each line told me which structure the line referred to. I imported everything into a staging table with two columns, one for the record-type data and one for the rest. Then I parsed the rest into a staging table for each type of record, with the correct column structure for that type (plus any fields you might need for the relationships), did the cleanup, and then imported into the prod tables. As you also have a different number of columns per type, I would try that approach (though you may have to populate some columns manually instead of deriving them directly from the file). Also, give each record an identity field in the staging tables; this will help you figure out the relationships, I think.
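A rough sketch of that staging step in T-SQL, with made-up table and column names (the identity column stands in for file order, so it assumes the file is loaded in a single ordered pass):

-- everything lands here first; RowID preserves the order of the lines
CREATE TABLE Staging (
    RowID INT IDENTITY(1,1) PRIMARY KEY,
    RecordType VARCHAR(20),
    RestOfLine VARCHAR(4000)
);

-- peel one record type out into its own typed staging table
INSERT INTO RouteStaging (RowID, RouteCode, RouteDescription)
SELECT RowID,
       LEFT(RestOfLine, CHARINDEX('|', RestOfLine) - 1),
       SUBSTRING(RestOfLine, CHARINDEX('|', RestOfLine) + 1, 4000)
FROM Staging
WHERE RecordType = 'Route';

-- tie each Record row back to the nearest preceding Route header
SELECT rec.RowID, rec.Name, rt.RouteCode
FROM RecordStaging rec
CROSS APPLY (SELECT TOP 1 r.RouteCode
             FROM RouteStaging r
             WHERE r.RowID < rec.RowID
             ORDER BY r.RowID DESC) rt;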