Scheduled task - P6 exports - oracle-cloud-infrastructure

I need to automate the exporting of P6 data to Excel on a regular, scheduled basis. How do I set up scheduled exports?
I need information about how to schedule exports in P6.

Related

SSIS runs fine against a remote server (Greenplum) data lake but takes 8+ hours

An SSIS package performs the ETL against a remote server (Greenplum environment). It runs fine but takes 8+ hours to complete. The remote server's interaction tables are massive (~1 billion rows each). Is there any option in SSIS specifically for handling this amount of data?
Remote server: data lake (Greenplum)
PS: I cannot schedule my query on the data lake itself due to company policy, but if I run the same script on the data lake manually it takes approximately 1 hour 20 minutes to complete the job.
Thank you!
How does SSIS perform the ETL? Does it run INSERT INTO ... VALUES ... statements? If so, poor performance is expected, because the per-row insert overhead is high.
There are several parameters that could help (reference: https://greenplum.org/oltp-workload-performance-improvement-in-greenplum-6/):
gp_enable_global_deadlock_detector
checkpoint_segments
However, the suggested way to do ETL is through gpload/gpfdist (or gpss).
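As a rough sketch of the gpfdist route (all table, column, and host names below are made up for illustration): export the source data to files, serve them with a gpfdist process, and load them set-based through an external table instead of row-by-row inserts.

    -- Assumes a gpfdist process is already serving the exported files, e.g.:
    --   gpfdist -d /data/exports -p 8081
    CREATE EXTERNAL TABLE ext_interaction (
        interaction_id bigint,
        customer_id    bigint,
        event_ts       timestamp,
        payload        text
    )
    LOCATION ('gpfdist://etl-host:8081/interaction_*.csv')
    FORMAT 'CSV' (DELIMITER ',' NULL '');

    -- One set-based load instead of per-row INSERT ... VALUES from SSIS.
    INSERT INTO interaction_stage
    SELECT * FROM ext_interaction;

gpload wraps the same mechanism in a YAML-driven utility, so either route avoids the per-row insert overhead.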

Copy MariaDb database with selected data

I have a live shop database. I need to be able to make copies of this database, but with order data only for the last 14 days. The database can be really big, but almost 80 percent of the data is in the order, payment, and related tables. So we want to copy only the last 14 days of data from these tables, and all data from the other tables. How can this be implemented?
To me this sounds like a classic ETL job. You could use any programming language (like Python) or KNIME to read from the source db (with an SQL query whose WHERE clause is something like your_date_column >= CURDATE() - INTERVAL 14 DAY) and write to a sink db.
You can then run it as a (cron) job on Windows/Linux and create a backup of the last 14 days each day, but make sure you also delete/drop the older backups if their total size gets too big.
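If the copy can live on the same MariaDB server, the per-table logic is just a handful of INSERT ... SELECT statements; a minimal sketch, assuming databases named shop_live and shop_copy, a created_at column on the big tables, and placeholder table names:

    -- Small reference tables: copy everything (table names are placeholders).
    INSERT INTO shop_copy.products  SELECT * FROM shop_live.products;
    INSERT INTO shop_copy.customers SELECT * FROM shop_live.customers;

    -- Large order table: only the last 14 days.
    INSERT INTO shop_copy.orders
    SELECT *
    FROM shop_live.orders
    WHERE created_at >= CURDATE() - INTERVAL 14 DAY;

    -- Child tables (payments etc.): keep only rows belonging to the copied orders.
    INSERT INTO shop_copy.payments
    SELECT p.*
    FROM shop_live.payments AS p
    JOIN shop_copy.orders   AS o ON o.id = p.order_id;

Wrapped in a script that first recreates the target schema, this runs fine from cron, and the same script can drop copies older than a few days so the backups don't pile up.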

SSAS Partition -Default or Full

We have partitions, let's say:
Log-2014,
Log 2015,
Log 2016-Jan-June,
Log 2016-July-Dec,
Log 2017-Jan-June,
Log 2017-July-Dec
Once the import routine starts, we insert the new data into the Log table and then process the cube using ADOMD.NET.
XMLA Process Type for Partition:
Log 2017-July-Dec - Process Full
All Other Partitions have Process Default
We are receiving new clients and getting old log data, e.g. for 2015 and 2016. Our import reads all the data and inserts it into the Log table.
Does "Process Default" work in this case for the 2015 and 2016 partitions?
Will this log data (2015, 2016) be merged into the correct partitions once the cube has been processed?
Are the aggregations recalculated after processing a partition with the Process Default type?
Thanks,
Chandru
Process Default will not bring new or updated data into your historical partitions. It reads the state of the partition and its components (data, index) and does the minimal processing needed to bring the partition to a fully processed state. If the partition is already processed at the moment the Process Default command is issued, the command will do nothing.
Process Default is usually used during development to bring cube changes online.
In your case, Process Full, or Process Data followed by Process Index, will bring the new data into the historical partitions.
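Whether the late-arriving 2015/2016 rows land in the right place is decided by each partition's source query, not by the process type; a hedged sketch of what those bindings typically look like (table and column names are assumptions):

    -- Hypothetical source query bound to the "Log 2015" partition.
    SELECT *
    FROM dbo.[Log]
    WHERE LogDate >= '2015-01-01' AND LogDate < '2016-01-01';

    -- Hypothetical source query bound to the "Log 2016-Jan-June" partition.
    SELECT *
    FROM dbo.[Log]
    WHERE LogDate >= '2016-01-01' AND LogDate < '2016-07-01';

The newly imported historical rows satisfy these predicates, so a Process Full (or Process Data plus Process Index) on those partitions will read them, and aggregations are rebuilt as part of that processing; Process Default alone will skip partitions that are already in a processed state.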

SSAS Tabular Refreshing only new data

I have a tabular cube which takes a long time to process. My idea is to process only new data every hour and run a full process during the night. Is there a way to do that with SSIS and a SQL Agent job?
Assuming your "new rows" are inserts into your fact table rather than updates or deletes, you can do a ProcessAdd operation. ProcessAdd takes a SQL query you provide that returns the new rows and adds them to your table in SSAS Tabular.
There are several ways to automate this, all of which can be run from SSIS. This article walks through the options well.
If you have updates and deletes, then you need to partition your table inside SSAS, for example by week, and then only reprocess (ProcessData) the partitions where rows have been inserted, updated, or deleted.
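A minimal sketch of the kind of delta query ProcessAdd needs, assuming the fact table carries a load timestamp and a watermark is kept in a control table maintained by the SSIS package (all object names are hypothetical):

    -- Hypothetical delta query handed to ProcessAdd: return only rows
    -- loaded since the last hourly refresh. In practice the watermark
    -- value is usually looked up and substituted by the SSIS package.
    DECLARE @LastProcessedTime datetime2 =
        (SELECT MAX(LastProcessedTime)
         FROM etl.ProcessWatermark
         WHERE TableName = 'FactSales');

    SELECT f.SaleKey,
           f.CustomerKey,
           f.ProductKey,
           f.Amount,
           f.LoadDateTime
    FROM dbo.FactSales AS f
    WHERE f.LoadDateTime > @LastProcessedTime;

The nightly full process then rebuilds everything from scratch, which also cleans up any late updates or duplicates the hourly ProcessAdd cannot handle.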

SQL Data synchronization between production and reporting server

I have one production server which will store data from day 1 (latest data) up to day 90.
I will move day 91 data to the reporting server every day as new data enters the production server.
My reporting server will keep 365 days of data.
Production will keep 90 days data.
There are still some daily data updates in production across the full 90 days of data. How should I synchronize the changes in the production data (90 days) with my reporting data (365 days)?
Please advise.
And for importing the day 91 data into reporting, is the SSIS Import Wizard the best way to do it?
Thanks in advance.
No, don't use the SSIS wizard. You cannot achieve what you want through the wizard.
You'll need to use something to move the data. If the two databases are on the same server, you don't need SSIS; you can just use INSERT/SELECT SQL statements to move the data. If the DBs are on different servers (or are expected to be in the future), then you need an ETL tool, of which SSIS may be your best option.
I suggest you store ALL data in your reporting database, i.e. day 1 to 365. Then you do all your reporting from the reporting database instead of trying to stitch the two databases together.
How do you identify day 91? Is there a single field in the source you can use to do this?
The simplest approach is a rolling window approach. You delete day 0 to, say, day 20 in your reporting database. Then you load that same window over the top from production.
The other approach is full change data capture (CDC), but if you have a reliable 'age' field that you can use, this won't be necessary.
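A hedged sketch of that rolling-window refresh, assuming both databases sit on the same SQL Server instance and the fact table has a TransactionDate column (all object names are made up):

    -- Re-copy the most recent window so late updates in production
    -- overwrite the reporting copy; older reporting rows are untouched.
    DECLARE @WindowStart date = DATEADD(DAY, -20, CAST(GETDATE() AS date));

    BEGIN TRANSACTION;

    DELETE FROM Reporting.dbo.SalesFact
    WHERE TransactionDate >= @WindowStart;

    INSERT INTO Reporting.dbo.SalesFact (SaleId, CustomerId, Amount, TransactionDate)
    SELECT SaleId, CustomerId, Amount, TransactionDate
    FROM Production.dbo.SalesFact
    WHERE TransactionDate >= @WindowStart;

    COMMIT TRANSACTION;

Scheduled daily (via a SQL Agent job, or an SSIS package if the databases end up on separate servers), this keeps the overlapping window consistent without tracking individual changes.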