I have a csv imported into my Hyperion v8.3 bqy file. I have some custom columns and a pivot already created. I just want to refresh the data. In the past, I would hit Process Current and it would direct me to my computer and I could select the csv file to update from. Now it will not do that. It doesn't go to my computer at all.
Any ideas?
Eric,
I'm not a power user, but I accomplish the same thing by ensuring that a file with the same name is in the same location from which the original csv was imported, and then using "Process All". This allows me to update the data, import it into my bqy, and automatically update any reporting based on the csv.
Don't know if this will help or not.
Dennis
dennis.van.camp#vanderlande.com
When you import a file as a Section into Hyperion, it maintains a link to the existing file by its exact path and file name. The only time it will prompt you for a new file is when that link is broken and that section gets a Process or Refresh command. Otherwise, it will refresh the data from the existing file.
So, if you want to force it to prompt you for a new file, you have to move or rename the old file.
But, you're looking for the Pivot and Computed columns to refresh. Two things on that ...
Computed Columns: You don't have to re-import the file if the rest of your data is current. Each column can be refreshed individually: right-click it, choose Modify, then click OK. You don't have to change any of the code; just click OK instead of Cancel.
Pivot: In the menu, you have to set your pivot options to update either when you Process (really, when the underlying data section is processed) or Manually (when you Process that pivot section itself).
Context:
For a data pipeline we need to ingest Excel spreadsheets directly into Foundry (arriving via email). In order to avoid any manual handling errors, we'd like to build a small Slate app that basically just uploads an Excel sheet and automatically appends it to an existing dataset (given schema, headers, etc.).
Unfortunately, there is very little documentation on the "File Import" widget or the API that gets called when drag and dropping a file into a folder.
Idea: Is there a way of uploading a file with Slate? Could this file then be added to a dataset, similar to what happens with the prompt that opens when dropping it into a folder?
You actually don't have to build a Slate app to do this! Datasets that are made up of underlying .csv files support new additions of files directly.
Note: All of the following screenshots are from the dataset preview page.
For example, here is a dataset I created from 4 .csv files:
I can then click the Import button in the top right to add more files (with the same schema or not, depending on whether you want to strictly adhere to your applied schema).
If you have already applied a schema, you can also simply Import new files on top of the dataset, but the schemas of the files must exactly match those already present, otherwise your dataset will fail when it is read.
I have an external file that I don't create which I need to import on a rolling basis; most of the column headers/field names have spaces in them. Is there a query I can write to change all of them at once? I'd rather not write a long query to get rid of spaces for each individual field name. The field names are always the same and in the same order in the file, and the spaces are in the middle of the field names (ex: "Employee Number").
First of all, "query" refers to an SQL statement (including those viewed in Design View) that retrieves or acts on data already in the database. Importing data from an external file is a separate action, not generally called a query. So strictly speaking the answer is "no".
However, Access does have built-in import functionality. I suppose you can call these import "functions" or "actions" or "processes", just not queries. And I'm not being a smart aleck, since much of getting help with applications and code is learning and using the correct terms.
Go to the External Data ribbon (a.k.a. toolbar) along the top of Access.
Click the Import Text File icon (be careful not to click the Export Text File icon, since they look similar; hover the mouse cursor over each button to see its text description).
Choose the filename, and pick which import option to use
As Gustav instructs in his answer, choosing "Link to data source by creating a linked table" is the most efficient solution for external files that don't change format. The linked table (hence the external file) can be re-queried without repeating numerous steps.
Walk through the Import Wizard steps. Play with the options if you need to figure it all out.
In particular, make sure to check "First Row Contains Field Names"
On one of the wizard steps, you can edit the field names to remove the spaces.
On the last step, click the "Save Import Steps" checkbox, specify a name, then click the "Save Import" button
To re-use the previously-saved import steps:
Go to the External Data ribbon (a.k.a. toolbar) along the top of Access.
Click "Saved Imports" button
Choose your saved import settings
Click Run
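If you'd rather trigger that saved import from code than from the ribbon, here is a minimal VBA sketch; the saved-import name "Import-EmployeeFile" is a placeholder for whatever name you gave it when you clicked "Save Import Steps":

Public Sub RunEmployeeImport()
    ' Runs a previously saved import specification by name.
    ' "Import-EmployeeFile" is a placeholder - use the name you chose
    ' in the "Save Import Steps" dialog.
    DoCmd.RunSavedImportExport "Import-EmployeeFile"
End Sub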
OR if you created a Linked table
There is no need to "re-import". Instead, a normal Access query can be used to get the data and update one of your normal data tables.
If the path of the external files changes, this can also be updated by right-clicking the linked table and choosing Linked Table Manager (also available on the External Data ribbon). Select the table in the list and also check "Always prompt for new location" before clicking OK. A standard file selection dialogue will be shown for selecting a new filepath.
(Just to be complete, it is also possible to write VBA code in Access to open a file, read and analyze the headers and then import the data according to your custom behavior, but this isn't for you if you'd "rather not write a long..." something to do this.)
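Still, for anyone who does want that route, here is a rough VBA sketch of it, assuming a comma-delimited file; the file paths and table name are placeholders:

Public Sub ImportWithCleanHeaders()
    ' Copy the file with the spaces stripped out of the header line,
    ' then import the cleaned copy into an existing table.
    Const srcPath As String = "C:\data\employees.csv"        ' placeholder
    Const tmpPath As String = "C:\data\employees_clean.csv"  ' placeholder
    Dim src As Integer, dst As Integer
    Dim textLine As String, firstLine As Boolean

    src = FreeFile
    Open srcPath For Input As #src
    dst = FreeFile
    Open tmpPath For Output As #dst

    firstLine = True
    Do While Not EOF(src)
        Line Input #src, textLine
        If firstLine Then
            textLine = Replace(textLine, " ", "")   ' "Employee Number" -> "EmployeeNumber"
            firstLine = False
        End If
        Print #dst, textLine
    Loop
    Close #src
    Close #dst

    ' Append into the existing table; True = first row contains field names.
    DoCmd.TransferText acImportDelim, , "tblEmployees", tmpPath, True
End Sub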
I'd rather not write a long query to get rid of spaces for each individual field name.
Maybe not, but there is no smart way to overcome this.
However, don't import the file; link it instead. Then use the linked file as the source in your query. In that query, alias the field names as you prefer, and do basic filtering and conversion of data. Then use this query for your further processing.
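For example, a saved query along these lines (the linked table name and fields are only illustrative, based on the "Employee Number" example above) can then be used as the source for everything downstream:

SELECT
    [Employee Number] AS EmployeeNumber,
    [First Name]      AS FirstName,
    [Hire Date]       AS HireDate
FROM tblLinkedEmployeeFile
WHERE [Employee Number] Is Not Null;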
I have several Access files with data from a group of users that I'm importing into one master file. The tables in the user files are each configured with a Before Change data macro that adds a timestamp each time the user edits the data.
("Data macros" are similar to triggers in SQL Server. They are different from UI macros. For more info, see this page.)
I'd like to import these timestamps into the master file, but since the master file is a clone of the user files, it also contains the same set of data macros. Thus, when I import the data, the timestamps get changed to the time of the import, which is unhelpful.
The only way I can find to edit data macros is by opening each table in Design View and then using the Ribbon to change the settings. There must be an easier way.
I'm using VBA code to perform the merge, and I'm wondering if I can also use it to temporarily disable the data macro feature until the merge has been completed. If there is another way to turn the data macros off for all files/tables at once, even on the users' files/tables, I'd be open to that too.
Disable the code? No. Bypass the code? Yes.
Use a table/field as a flag. Set the status before importing. Check the status of this flag in your event code, and decide if you want to skip the rest of the code. For example:
If [tblSkipFlag].[SkipFlag] = False
    {rest of the data macro actions}
End If
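In the merge code itself, that flag can be raised before the import and lowered again afterwards. A rough VBA sketch, assuming a one-row table tblSkipFlag with a Yes/No field SkipFlag:

Public Sub MergeWithMacrosBypassed()
    Dim db As DAO.Database
    Set db = CurrentDb

    ' Raise the flag so the Before Change macro skips its timestamp logic.
    db.Execute "UPDATE tblSkipFlag SET SkipFlag = True", dbFailOnError

    ' ... run the append/update queries that merge the user files here ...

    ' Lower the flag so normal edits get timestamped again.
    db.Execute "UPDATE tblSkipFlag SET SkipFlag = False", dbFailOnError
End Sub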
Another answer here explains how you can use the (almost-)undocumented SaveAsText and LoadFromText methods with the acTableDataMacro argument to save and retrieve the Data Macros to a text file in XML format. If you were to save the Data Macro XML text for each table, replace ...
<DataMacro Event="BeforeChange"><Statements>
... with ...
<DataMacro Event="BeforeChange"><Statements><Action Name="StopMacro"/>
... and then write the updated macros back to the table, that would presumably have the effect of "short-circuiting" those macros.
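A rough VBA sketch of that save / edit / reload cycle (acTableDataMacro and these methods are mostly undocumented, so treat this as an experiment and back up your tables first; the temp path is a placeholder):

Public Sub ShortCircuitDataMacros(ByVal tableName As String)
    Dim xmlPath As String, xml As String, f As Integer
    xmlPath = Environ("TEMP") & "\" & tableName & "_datamacro.xml"

    ' Dump the table's data macros to an XML text file.
    Application.SaveAsText acTableDataMacro, tableName, xmlPath

    ' Read the XML and inject a StopMacro as the first BeforeChange action.
    f = FreeFile
    Open xmlPath For Input As #f
    xml = Input$(LOF(f), #f)
    Close #f

    xml = Replace(xml, _
        "<DataMacro Event=""BeforeChange""><Statements>", _
        "<DataMacro Event=""BeforeChange""><Statements><Action Name=""StopMacro""/>")

    f = FreeFile
    Open xmlPath For Output As #f
    Print #f, xml
    Close #f

    ' Write the modified macros back over the originals.
    Application.LoadFromText acTableDataMacro, tableName, xmlPath
End Sub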
I have a problem that has been annoying me for quite some time now and a few days ago I started googling for a solution, but I haven't really gotten anything to work. I've read a little about something called SSIS, but I'm not sure it does what I'm looking for or if there is something else I should research in order to accomplish my goal. This is my problem:
My accounting program produces and updates a .dbf file with information about all vouchers and places it in a folder on my local computer. Our MySQL database must be continually updated with this information. So this is what I do twice a day:
I open up the .dbf file in Excel
Save it as a .csv
Close Excel
Open the file in Notepad++
Convert the formatting to UTF-8
Save
Log in to MySQL
Go to the right table
Upload the .csv
Replace the old data with the new
As this takes quite a bit of time, I feel that there must be better ways to do this. It would be great if I could have this scheduled to be done automatically or if there is some kind of an SQL query that could do this, because then I could use PHP to make a website that I could enter and have the query run when I press a button or something.
So my question is: What is the most simple way to continually get the info from the .dbf file into my SQL server?
There is a way to do your job on a schedule with DBF Commander Pro's command-line interface. Use the following command in a *.BAT file:
dbfcommander.exe -edb <dbf_file_name> <server_table_name> <connection_string>
After that, create a schedule for this BAT file using Windows Task Scheduler.
The only remaining issue is that you need to clear the destination table in the MySQL database before the export process.
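A rough sketch of what that BAT file could look like; the paths, credentials, and table names are placeholders, and the real connection string comes out of the Export to DBMS dialog described below:

@echo off
rem Empty the destination table first, as noted above (requires the mysql client).
mysql -h localhost -u myuser -pMyPassword -e "TRUNCATE TABLE vouchers" accounting

rem Push the current contents of the DBF into MySQL.
"C:\Program Files\DBF Commander\dbfcommander.exe" -edb "C:\accounting\vouchers.dbf" vouchers "<connection_string>"

The Task Scheduler entry then just points at this BAT file; create one trigger for each time of day you want it to run.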
To try the export process in the app's GUI, click 'File -> Export to DBMS'. In the window that appears, click the Build button to build the connection string: select MS OLEDB Provider for MySQL Server, choose your server from the list, provide a login and password, select a database, and click OK.
In the Export to DBMS window, select the destination table you want to import the source DBF file into, then click Export. The command line you need can be found at the bottom of the window.
More info on importing and exporting DBF files to a database can be found here. Detailed usage of the command line is here.
You mention doing it in PHP. What is stopping you from doing it there?
You could create one connection using the VFP OLE DB provider to open the folder containing the table, then open and read the table. Then have a SECOND connection to your MySQL database open and ready to push the data there.
Then, for each row read from the VFP OleDB connection result set, do whatever special cleansing you need to.
Then, query through the MySQL connection to see whether it is an existing entry, decide whether an insert or an update is needed, and send the data accordingly.
Continue for the rest of the records from the VFP result set.
No need to open in Excel, save to CSV format, load yet another tool, etc...
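A rough sketch of that two-connection approach in PHP (Windows-only; it assumes the php_com_dotnet extension and the Visual FoxPro OLE DB provider are installed, and the folder, table, and column names are placeholders):

<?php
// Connection 1: the VFP OLE DB provider, pointed at the folder holding the .dbf.
$dbf = new COM("ADODB.Connection");
$dbf->Open("Provider=VFPOLEDB.1;Data Source=C:\\accounting\\export\\");

// Connection 2: the MySQL database that has to stay in sync.
$mysql = new PDO("mysql:host=localhost;dbname=accounting;charset=utf8", "myuser", "mypassword");
$mysql->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Insert new vouchers, update existing ones (voucher_no is assumed to be unique).
$upsert = $mysql->prepare(
    "INSERT INTO vouchers (voucher_no, amount)
     VALUES (:no, :amount)
     ON DUPLICATE KEY UPDATE amount = VALUES(amount)");

$rs = $dbf->Execute("SELECT voucher_no, amount FROM vouchers");
while (!$rs->EOF) {
    // Any cleansing (trimming, type fixes, UTF-8 conversion) goes here.
    $upsert->execute([
        ':no'     => trim((string) $rs->Fields->Item("voucher_no")->Value),
        ':amount' => (float) $rs->Fields->Item("amount")->Value,
    ]);
    $rs->MoveNext();
}
$rs->Close();

Hang that script behind a button on your PHP site, or run it from the command line on a schedule, and the manual Excel/Notepad++ steps go away.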
I am currently using SSIS 2008 and am fairly new to it. I have a programming background with some Java, VBA, and VB.NET.
I have a connection to a csv file that contains a list of URLs.
There are about a thousand rows in the file, and for each row I want to add the URL to a package variable that will be used to see whether the most current link has already been downloaded and updated or not.
I've set up a Foreach Loop Container that is intended to loop through each row of the csv file.
However, I cannot figure out how to "look at" each row. Once I can do that I know it will be no problem to assign the URL to the variable but I am stuck mid-way. Does anyone have any suggestions?
You want to do something to each row from a given source. That's usually a data-flow type of activity. Drop a Data Flow Task onto your Control Flow. Inside that data flow, add a Flat File Source. In the Flat File connection manager, click New and fill out the details for your file; I assume it's just one data element (the URL) per line. Click OK through the dialogs and you should have a working data source.
Great, now all you need to do is apply that "something" to the data coming in, which in your case is "see if the most current link has already been downloaded and updated or not." I'm not sure exactly what that translates to, but whatever you attach (Lookup task, Script Component, etc.) to the output from the Flat File Source will perform that operation for every row flowing through it.
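If that "something" needs custom logic per row, one option is to drop a Script Component (configured as a transformation) right after the Flat File Source; its per-row method is called once for every record. A rough C# sketch to paste inside the generated ScriptMain class, assuming the flat-file column is named Url and that IsAlreadyDownloaded stands in for whatever check you actually need:

public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    // Called once per row flowing out of the Flat File Source.
    string url = Row.Url;

    if (!IsAlreadyDownloaded(url))
    {
        // e.g. flag the row, route it to a separate output, or download it here.
    }
}

// Placeholder for the real "has the most current link been downloaded?" check.
private bool IsAlreadyDownloaded(string url)
{
    return false;
}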